rm(list = ls()) #Clean the entire environment
cat("\014") # clean console

Loading libraries and Reading the data set:

## set library
library(lavaan) 
library(semPlot) #for visualization
library(knitr)
library(dplyr)
library(lavaanPlot)
library(lm.beta)
library(rcompanion)   #Histogram and Normal Curve
library(nortest)      #Kolmogorov-Smirnov-Test
library(corrplot)     #correlation matrix plot
library(olsrr)        #VIF and Tolerance Values
library(pastecs)
library(REdaS)        #Bartlett's Test
library(psych)        # principal axis factoring 
library(naniar)       # for missing values analysis
library(RColorBrewer)
library(ggcorrplot)
library(psy)
setwd("/Users/mahmoudalkheja/Desktop/Advanced Data Driven Decision Making/Case Study III-20230420")
myData <- read.csv("Case Study III_Structural Equation Modeling.csv")
explanation <- read.csv("Variables and Labels_Galeries Lafayette.csv")
myData <- data.frame(sapply(myData, function(x) ifelse(x == 999, NA, as.numeric(x))))

The `sapply` call above recodes the missing-value indicator 999 in the data set to NA.

Exploratory factor analysis

Before conducting the confirmatory factor analysis, we first run an exploratory factor analysis in R to get an initial idea of the dimensions along which customers perceive Galeries Lafayette. In the file, all 22 image items are proposed to measure different constructs or perceptual dimensions.

Explore the data

head(myData)
##   Im1 Im2 Im3 Im4 Im5 Im6 Im7 Im8 Im9 Im10 Im11 Im12 Im13 Im14 Im15 Im16 Im17
## 1   7   7   4   4   4   7  NA  NA   6    7    7    6    4    7    5    5    4
## 2   4   4  NA   4   3   5   3   5   4    5    4    5    5    5    4    4    4
## 3   5   5   7   7   7   4  NA   6   6    7    7    7    7    7    6    6    6
## 4   5   5   5   5   5   4   4   4   4    4    6    6    6    5    6    5    4
## 5   4   4   4   3   5   4   4   4   3    6    5    4    4    6    4    3    3
## 6   4   4   5   5  NA   4   2   5   3    3    5    3    3   NA    3    3    5
##   Im18 Im19 Im20 Im21 Im22 C_CR1 C_CR2 C_CR3 C_CR4 C_REP1 C_REP2 C_REP3 COM_A1
## 1    4    5    4    5    4     6     6     6     6      5     NA      5      5
## 2    3    2    2    6    2     1    NA    NA    NA      3      4      4     NA
## 3    5    6    7    7    6    NA     1     1     2      5      5      5      4
## 4    4    5    4    3    2     2     4     5     3      4      5      5      5
## 5    3    3    3    3    3     2     7     2     1      4      5      5      4
## 6    5    3    4    5    5     5     7     5     2      5      5      5     NA
##   COM_A2 COM_A3 COM_A4 SAT_1 SAT_2 SAT_3 SAT_P1 SAT_P2 SAT_P3 SAT_P4 SAT_P5
## 1      6      6      4     6     6     6      6      7      7      6      6
## 2      1      1      1     4     4     4     NA     NA      5      5      5
## 3     NA      6      3     7     7     7      7      7      7      7      7
## 4      4      2      3     5     4    NA      4      4      4      3      3
## 5      4      3      3    NA     4     4      4      4      4      3      3
## 6      2     NA      2     4     5    NA      2      3      3      4      3
##   SAT_P6 TRU_1 TRU_2 TRU_3
## 1      5     7     7     7
## 2     NA     4    NA    NA
## 3      7    NA     6     6
## 4      3     2     5     4
## 5      3     4     4     4
## 6      4     5     4     6
summary(myData)
##       Im1             Im2             Im3             Im4             Im5      
##  Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.00  
##  1st Qu.:4.000   1st Qu.:4.000   1st Qu.:4.000   1st Qu.:4.000   1st Qu.:4.00  
##  Median :5.000   Median :5.000   Median :5.000   Median :5.000   Median :5.00  
##  Mean   :4.792   Mean   :4.854   Mean   :4.985   Mean   :5.002   Mean   :5.04  
##  3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.00  
##  Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.00  
##  NA's   :14      NA's   :18      NA's   :20      NA's   :10      NA's   :29    
##       Im6             Im7            Im8             Im9             Im10      
##  Min.   :1.000   Min.   :2.00   Min.   :1.000   Min.   :1.000   Min.   :2.000  
##  1st Qu.:5.000   1st Qu.:5.00   1st Qu.:6.000   1st Qu.:4.000   1st Qu.:6.000  
##  Median :6.000   Median :6.00   Median :6.000   Median :5.000   Median :6.000  
##  Mean   :5.824   Mean   :5.75   Mean   :5.996   Mean   :5.076   Mean   :6.102  
##  3rd Qu.:7.000   3rd Qu.:7.00   3rd Qu.:7.000   3rd Qu.:6.000   3rd Qu.:7.000  
##  Max.   :7.000   Max.   :7.00   Max.   :7.000   Max.   :7.000   Max.   :7.000  
##  NA's   :9       NA's   :26     NA's   :6       NA's   :16      NA's   :6      
##       Im11            Im12            Im13            Im14      
##  Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:5.000   1st Qu.:5.000   1st Qu.:5.000   1st Qu.:6.000  
##  Median :6.000   Median :6.000   Median :6.000   Median :6.000  
##  Mean   :5.654   Mean   :5.665   Mean   :5.444   Mean   :6.144  
##  3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:7.000  
##  Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
##  NA's   :12      NA's   :21      NA's   :15      NA's   :24     
##       Im15            Im16           Im17            Im18            Im19      
##  Min.   :1.000   Min.   :1.00   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:4.000   1st Qu.:4.00   1st Qu.:4.000   1st Qu.:4.000   1st Qu.:4.000  
##  Median :5.000   Median :5.00   Median :5.000   Median :5.000   Median :5.000  
##  Mean   :5.098   Mean   :5.13   Mean   :5.018   Mean   :4.571   Mean   :5.148  
##  3rd Qu.:6.000   3rd Qu.:6.00   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000  
##  Max.   :7.000   Max.   :7.00   Max.   :7.000   Max.   :7.000   Max.   :7.000  
##  NA's   :12      NA's   :24     NA's   :12      NA's   :28      NA's   :12     
##       Im20            Im21            Im22           C_CR1      
##  Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000  
##  1st Qu.:4.000   1st Qu.:4.000   1st Qu.:3.000   1st Qu.:1.000  
##  Median :5.000   Median :5.000   Median :4.000   Median :2.000  
##  Mean   :4.669   Mean   :5.135   Mean   :4.289   Mean   :2.674  
##  3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:4.000  
##  Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :7.000  
##  NA's   :9       NA's   :5       NA's   :17      NA's   :20     
##      C_CR2           C_CR3           C_CR4           C_REP1          C_REP2    
##  Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.00  
##  1st Qu.:3.000   1st Qu.:1.000   1st Qu.:1.000   1st Qu.:4.000   1st Qu.:4.00  
##  Median :5.000   Median :3.000   Median :2.000   Median :4.000   Median :5.00  
##  Mean   :4.616   Mean   :3.271   Mean   :2.796   Mean   :4.281   Mean   :4.51  
##  3rd Qu.:6.000   3rd Qu.:5.000   3rd Qu.:4.000   3rd Qu.:5.000   3rd Qu.:5.00  
##  Max.   :7.000   Max.   :7.000   Max.   :7.000   Max.   :5.000   Max.   :5.00  
##  NA's   :30      NA's   :6       NA's   :10      NA's   :5       NA's   :16    
##      C_REP3          COM_A1          COM_A2         COM_A3          COM_A4     
##  Min.   :1.000   Min.   :1.000   Min.   :1.00   Min.   :1.000   Min.   :1.000  
##  1st Qu.:4.000   1st Qu.:4.000   1st Qu.:3.00   1st Qu.:2.000   1st Qu.:2.000  
##  Median :5.000   Median :4.000   Median :4.00   Median :3.000   Median :3.000  
##  Mean   :4.682   Mean   :4.302   Mean   :3.88   Mean   :3.536   Mean   :3.456  
##  3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:5.00   3rd Qu.:5.000   3rd Qu.:5.000  
##  Max.   :5.000   Max.   :7.000   Max.   :7.00   Max.   :7.000   Max.   :7.000  
##  NA's   :18      NA's   :14      NA's   :11     NA's   :18      NA's   :9      
##      SAT_1           SAT_2           SAT_3          SAT_P1          SAT_P2     
##  Min.   :2.000   Min.   :1.000   Min.   :1.00   Min.   :1.000   Min.   :1.000  
##  1st Qu.:5.000   1st Qu.:5.000   1st Qu.:5.00   1st Qu.:5.000   1st Qu.:5.000  
##  Median :6.000   Median :6.000   Median :6.00   Median :6.000   Median :6.000  
##  Mean   :5.349   Mean   :5.481   Mean   :5.45   Mean   :5.424   Mean   :5.486  
##  3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.00   3rd Qu.:6.000   3rd Qu.:6.000  
##  Max.   :7.000   Max.   :7.000   Max.   :7.00   Max.   :7.000   Max.   :7.000  
##  NA's   :5       NA's   :10      NA's   :40     NA's   :8       NA's   :16     
##      SAT_P3          SAT_P4          SAT_P5         SAT_P6          TRU_1      
##  Min.   :1.000   Min.   :2.000   Min.   :1.00   Min.   :1.000   Min.   :1.000  
##  1st Qu.:5.000   1st Qu.:5.000   1st Qu.:4.00   1st Qu.:5.000   1st Qu.:3.000  
##  Median :6.000   Median :6.000   Median :6.00   Median :6.000   Median :5.000  
##  Mean   :5.406   Mean   :5.535   Mean   :5.29   Mean   :5.635   Mean   :4.368  
##  3rd Qu.:6.000   3rd Qu.:6.000   3rd Qu.:6.00   3rd Qu.:7.000   3rd Qu.:5.000  
##  Max.   :7.000   Max.   :7.000   Max.   :7.00   Max.   :7.000   Max.   :7.000  
##  NA's   :6       NA's   :13      NA's   :8      NA's   :13      NA's   :28     
##      TRU_2           TRU_3      
##  Min.   :1.000   Min.   :1.000  
##  1st Qu.:4.000   1st Qu.:5.000  
##  Median :5.000   Median :6.000  
##  Mean   :5.121   Mean   :5.461  
##  3rd Qu.:6.000   3rd Qu.:6.000  
##  Max.   :7.000   Max.   :7.000  
##  NA's   :33      NA's   :28
dim(myData)
## [1] 553  45

Our data set consists of 553 observations and 45 variables, with missing values in some portions. All image items (Im1 to Im22) are measured on a 7-point scale.

gg_miss_var(myData)

For the exploratory factor analysis we consider only the variables Im1 to Im22, and we use listwise deletion to handle missing data before starting.

image <- myData[,c(1:22)]
miss_var_summary(image)
## # A tibble: 22 × 3
##    variable n_miss pct_miss
##    <chr>     <int>    <dbl>
##  1 Im5          29     5.24
##  2 Im18         28     5.06
##  3 Im7          26     4.70
##  4 Im14         24     4.34
##  5 Im16         24     4.34
##  6 Im12         21     3.80
##  7 Im3          20     3.62
##  8 Im2          18     3.25
##  9 Im22         17     3.07
## 10 Im9          16     2.89
## # ℹ 12 more rows
image <- na.omit(image)
dim(image)
## [1] 385  22

385 observations remain after removing rows with missing values.

Normality assumption

# histograms 
par(mfrow = c(3, 3))
for (i in colnames(image)) {
  plotNormalHistogram(image[,i], main = paste("Frequency Distribution of", i))
}

lillie.test(image$Im1)   # Kolmogorov-Smirnov-Test for normality 
## 
##  Lilliefors (Kolmogorov-Smirnov) normality test
## 
## data:  image$Im1
## D = 0.17221, p-value < 2.2e-16
lillie.test(image$Im22)   # Kolmogorov-Smirnov-Test for normality 
## 
##  Lilliefors (Kolmogorov-Smirnov) normality test
## 
## data:  image$Im22
## D = 0.15889, p-value < 2.2e-16

Examining the histograms, the normality assumption does not appear to be met. Specifically, the Lilliefors (Kolmogorov-Smirnov) test for normality on the sample items Im1 and Im22 yields very small p-values, so the null hypothesis that the samples come from a normal distribution is rejected.
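The two calls above can be extended to all 22 items in one pass. A minimal sketch (using simulated 7-point data as a stand-in for the real `image` data frame, so the snippet is self-contained):

```r
library(nortest)  # lillie.test

# Stand-in for the real `image` data frame: three simulated 7-point items
set.seed(1)
image_demo <- data.frame(replicate(3, sample(1:7, 385, replace = TRUE)))

# Lilliefors (Kolmogorov-Smirnov) p-value for every column at once
ks_pvalues <- sapply(image_demo, function(x) lillie.test(x)$p.value)
round(ks_pvalues, 4)  # p < .05 means normality is rejected for that item
```

With the real data, `sapply(image, ...)` would flag every item whose distribution departs significantly from normality.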

Correlation matrix:

Matrix = cor(image, use="complete.obs") # correlation matrix (complete observations only)
ggcorrplot(round(as.matrix(Matrix), 2),
           method = "square", 
           type = "lower", 
           show.diag = FALSE,
           lab = TRUE, lab_col = "black", hc.order = T, lab_size = 2)

Most of the correlation coefficients among the variables exceed 0.3, indicating a substantial degree of correlation. Factor analysis is therefore an appropriate method for extracting underlying factors from these correlated variables.

Check adequacy of correlation matrix.

KMOTEST=KMOS(image)
KMOTEST
## 
## Kaiser-Meyer-Olkin Statistics
## 
## Call: KMOS(x = image)
## 
## Measures of Sampling Adequacy (MSA):
##       Im1       Im2       Im3       Im4       Im5       Im6       Im7       Im8 
## 0.8244624 0.8224640 0.8640362 0.8542604 0.9546668 0.8224827 0.8448231 0.9300079 
##       Im9      Im10      Im11      Im12      Im13      Im14      Im15      Im16 
## 0.9380091 0.8285789 0.9113882 0.8789413 0.8722220 0.8267452 0.9647563 0.9092200 
##      Im17      Im18      Im19      Im20      Im21      Im22 
## 0.8644991 0.8550678 0.9400714 0.8266391 0.9149654 0.8793157 
## 
## KMO-Criterion: 0.8770975
sort(KMOTEST$MSA)
##       Im2       Im6       Im1      Im20      Im14      Im10       Im7       Im4 
## 0.8224640 0.8224827 0.8244624 0.8266391 0.8267452 0.8285789 0.8448231 0.8542604 
##      Im18       Im3      Im17      Im13      Im12      Im22      Im16      Im11 
## 0.8550678 0.8640362 0.8644991 0.8722220 0.8789413 0.8793157 0.9092200 0.9113882 
##      Im21       Im8       Im9      Im19       Im5      Im15 
## 0.9149654 0.9300079 0.9380091 0.9400714 0.9546668 0.9647563

The Kaiser-Meyer-Olkin (KMO) test gives a KMO criterion of 0.88, which is good: values above 0.6 are required for a sound factor analysis. There is also no gap between the variables; all MSA values are very close to each other.

bart_spher(image)
##  Bartlett's Test of Sphericity
## 
## Call: bart_spher(x = image)
## 
##      X2 = 6451.238
##      df = 231
## p-value < 2.22e-16

In Bartlett’s Test of Sphericity, the small p-value provides strong evidence against the null hypothesis that the correlation matrix equals the identity matrix; in other words, the variables in the data set are significantly correlated.
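For reference, the chi-square statistic reported by `bart_spher()` can be reproduced from the determinant of the correlation matrix. A sketch on simulated data (with the real data one would set `n <- nrow(image)` and `R <- cor(image)`):

```r
# Bartlett's sphericity statistic: X2 = -((n-1) - (2p+5)/6) * ln(det(R))
set.seed(2)
X <- matrix(rnorm(385 * 5), ncol = 5)  # simulated stand-in data
n <- nrow(X); p <- ncol(X)
R <- cor(X)

chi_sq  <- -((n - 1) - (2 * p + 5) / 6) * log(det(R))
df      <- p * (p - 1) / 2              # p(p-1)/2 off-diagonal correlations
p_value <- pchisq(chi_sq, df, lower.tail = FALSE)
c(chi_sq = chi_sq, df = df, p_value = p_value)
```

The closer R is to the identity matrix, the closer its determinant is to 1 and the smaller the statistic; highly correlated items drive the determinant toward 0 and the statistic up.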

Principal axes factoring

# Run factor analysis with no rotation
# ?fa  # details on the function

fa_0 <- fa(image, 
           nfactors = ncol(image), 
           rotate = "none")

# Look at communalities
sort(fa_0$communalities)
##      Im11       Im9       Im5      Im21      Im19      Im15      Im16       Im8 
## 0.6017430 0.6131069 0.6413666 0.7117001 0.7290241 0.7485245 0.7604446 0.7936975 
##      Im12      Im18      Im13      Im20      Im22       Im6       Im7      Im10 
## 0.8132541 0.8294448 0.8417272 0.8564198 0.8745729 0.8773807 0.8937851 0.9167615 
##       Im2       Im3      Im17       Im4      Im14       Im1 
## 0.9264859 0.9307854 0.9352630 0.9664673 0.9711077 0.9761381
total_var_explained_paf <- data.frame(
  Factor_n = as.factor(1:length(fa_0$e.values)), 
  Eigenvalue = fa_0$e.values,
  Variance = fa_0$e.values/(ncol(image))*100,
  Cum_var = cumsum(fa_0$e.values/ncol(image))
  )
total_var_explained_paf
##    Factor_n Eigenvalue   Variance   Cum_var
## 1         1 8.97758636 40.8072107 0.4080721
## 2         2 2.46726381 11.2148355 0.5202205
## 3         3 1.56195916  7.0998144 0.5912186
## 4         4 1.45683885  6.6219948 0.6574386
## 5         5 1.24785174  5.6720533 0.7141591
## 6         6 1.14733750  5.2151705 0.7663108
## 7         7 0.81009930  3.6822696 0.8031335
## 8         8 0.71161301  3.2346046 0.8354795
## 9         9 0.56785521  2.5811600 0.8612911
## 10       10 0.45684420  2.0765645 0.8820568
## 11       11 0.36139965  1.6427257 0.8984840
## 12       12 0.33234747  1.5106703 0.9135907
## 13       13 0.29499718  1.3408963 0.9269997
## 14       14 0.28351700  1.2887137 0.9398868
## 15       15 0.24936387  1.1334721 0.9512216
## 16       16 0.22811058  1.0368663 0.9615902
## 17       17 0.20225224  0.9193284 0.9707835
## 18       18 0.18624143  0.8465520 0.9792490
## 19       19 0.15737216  0.7153280 0.9864023
## 20       20 0.11623773  0.5283533 0.9916858
## 21       21 0.10167221  0.4621464 0.9963073
## 22       22 0.08123935  0.3692698 1.0000000

The first factor explains 40.81% of the total variance, indicating a relatively strong ability to capture the underlying structure of the data. The first six factors have eigenvalues greater than 1 and collectively account for 76.63% of the total variance. The seventh and eighth factors have eigenvalues below 1; adding them would raise the explained total variance to 83.55%.
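The percentages quoted above follow directly from the eigenvalues in the table: each eigenvalue divided by the number of items (22) gives that factor's share of variance. For instance:

```r
# First two eigenvalues from fa_0 (see table above)
eig <- c(8.97758636, 2.46726381)

round(eig / 22 * 100, 2)          # per-factor % of variance: 40.81, 11.21
round(cumsum(eig / 22) * 100, 2)  # cumulative %: 40.81, 52.02
```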

# Scree plot
ggplot(total_var_explained_paf, aes(x = Factor_n, y = Eigenvalue, group = 1)) + 
  geom_point() + geom_line() +
  xlab("Number of factors") +
  ylab("Initial eigenvalue") +
  labs( title = "Scree Plot") +
  geom_hline(yintercept= 1, linetype="dashed", color = "red")

According to the Kaiser criterion, we should extract factors with eigenvalues larger than 1, which suggests retaining six factors. It is worth noting, however, that the eigenvalue of the 7th factor (0.81) is not far below 1.
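Besides the Kaiser criterion and the scree plot, Horn's parallel analysis (available as `fa.parallel()` in the psych package already loaded above) compares each observed eigenvalue with eigenvalues obtained from random data of the same dimensions; with the real items this would be `fa.parallel(image, fa = "fa")`. A self-contained sketch on simulated data:

```r
library(psych)

# Simulated stand-in data; with the real items use fa.parallel(image, fa = "fa")
set.seed(3)
demo <- data.frame(replicate(6, sample(1:7, 200, replace = TRUE)))

# Compare observed eigenvalues against those of resampled random data
pa <- fa.parallel(demo, fa = "fa", plot = FALSE)
pa$nfact  # suggested number of factors to retain
```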

Factor rotation and factor interpretation.

fa_paf_6f <- fa(
  image,
  fm = "pa",              # principal axis factoring
  rotate = "varimax",     # varimax rotation
  nfactors = 6            # 6 factors
  )
communalities_6f <- data.frame(sort(fa_paf_6f$communality))
communalities_6f
##      sort.fa_paf_6f.communality.
## Im11                   0.4483653
## Im9                    0.4631735
## Im16                   0.4655592
## Im19                   0.5230760
## Im5                    0.5464495
## Im18                   0.5766027
## Im15                   0.6320150
## Im21                   0.6485197
## Im13                   0.6995376
## Im17                   0.7003475
## Im6                    0.7061967
## Im12                   0.7291158
## Im8                    0.7427949
## Im14                   0.7587075
## Im2                    0.7621640
## Im10                   0.7762828
## Im7                    0.7792579
## Im20                   0.7839807
## Im22                   0.7907061
## Im1                    0.8373896
## Im3                    0.8552267
## Im4                    0.9171190
print(fa_paf_6f$loadings, cutoff=0.3, sort = TRUE)
## 
## Loadings:
##      PA2    PA5    PA1    PA4    PA3    PA6   
## Im6   0.622                              0.544
## Im7   0.734                              0.457
## Im8   0.817                                   
## Im10  0.799                                   
## Im14  0.791                                   
## Im1          0.855                            
## Im2          0.826                            
## Im15         0.598                            
## Im3                 0.832                     
## Im4                 0.874                     
## Im5                 0.640                     
## Im11                       0.591              
## Im12                       0.792              
## Im13                       0.731              
## Im20                              0.841       
## Im21                              0.734       
## Im22                              0.788       
## Im17         0.349  0.308  0.385         0.547
## Im18                       0.347         0.513
## Im9   0.363                0.319         0.424
## Im16         0.485  0.373                     
## Im19         0.464  0.403                     
## 
##                  PA2   PA5   PA1   PA4   PA3   PA6
## SS loadings    3.348 2.847 2.761 2.382 2.318 1.487
## Proportion Var 0.152 0.129 0.125 0.108 0.105 0.068
## Cumulative Var 0.152 0.282 0.407 0.515 0.621 0.688

Based on the loadings of the variables and their relationship to the factors, the 6-factor model does not appear to fit the data well. Several variables, such as Im17 and Im9, load on multiple factors, which can indicate a lack of discriminant validity. It may therefore be necessary to explore other solutions.

One possible solution is to consider a 7-factor model, as suggested by the Kaiser criterion. This would involve extracting an additional factor and re-analyzing the data to assess the fit of the model.

fa_paf_7f <- fa(
  image,
  fm = "pa",              # principal axis factoring
  rotate = "varimax",     # varimax rotation
  nfactors = 7            # 7 factors
  )
communalities_7f <- data.frame(sort(fa_paf_7f$communality),
                               sort(fa_paf_6f$communality))
communalities_7f
##      sort.fa_paf_7f.communality. sort.fa_paf_6f.communality.
## Im11                   0.4449125                   0.4483653
## Im9                    0.4559063                   0.4631735
## Im16                   0.4734149                   0.4655592
## Im19                   0.5297997                   0.5230760
## Im5                    0.5437903                   0.5464495
## Im15                   0.6317519                   0.5766027
## Im21                   0.6494567                   0.6320150
## Im13                   0.6999080                   0.6485197
## Im8                    0.7217525                   0.6995376
## Im18                   0.7256024                   0.7003475
## Im6                    0.7644746                   0.7061967
## Im2                    0.7824128                   0.7291158
## Im22                   0.7879817                   0.7427949
## Im20                   0.7883146                   0.7587075
## Im14                   0.8099945                   0.7621640
## Im12                   0.8302990                   0.7762828
## Im7                    0.8473835                   0.7792579
## Im1                    0.8575489                   0.7839807
## Im3                    0.8613613                   0.7907061
## Im10                   0.8910746                   0.8373896
## Im17                   0.9223516                   0.8552267
## Im4                    0.9700881                   0.9171190
print(fa_paf_7f$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      PA5   PA1   PA3   PA2   PA4   PA7   PA6  
## Im1  0.860                                    
## Im2  0.833                                    
## Im15 0.586                                    
## Im3        0.828                              
## Im4        0.903                              
## Im5        0.628                              
## Im20             0.846                        
## Im21             0.734                        
## Im22             0.784                        
## Im8                    0.630       0.509      
## Im10                   0.874                  
## Im14                   0.810                  
## Im11                         0.579            
## Im12                         0.853            
## Im13                         0.716            
## Im6                                0.826      
## Im7                    0.322       0.836      
## Im17                                     0.822
## Im18                                     0.731
## Im9                          0.332 0.454      
## Im16 0.458 0.339                              
## Im19 0.435 0.368                              
## 
##                  PA5   PA1   PA3   PA2   PA4   PA7   PA6
## SS loadings    2.689 2.627 2.306 2.301 2.211 2.174 1.682
## Proportion Var 0.122 0.119 0.105 0.105 0.101 0.099 0.076
## Cumulative Var 0.122 0.242 0.346 0.451 0.552 0.650 0.727

After moving to a 7-factor model, the results improve. However, some variables still load on multiple factors, such as Im8, Im16 and Im19, which suggests a lack of discriminant validity. In addition, some variables have low loadings, such as Im15 and Im11, indicating that they may not contribute much to the underlying factors. Let’s try to interpret the results:

  1. What do GLB represent from your point of view? Large Assortment

  2. What do GLB represent from your point of view? Assortment Variety

  3. What do GLB represent from your point of view? Artistic Decoration of Sales Area

  4. What do GLB represent from your point of view? Creative Decoration of Sales Area

  5. What do GLB represent from your point of view? Appealing Arrangement of Shop Windows

  6. What do GLB represent from your point of view? France

  7. What do GLB represent from your point of view? French Savoir-vivre

  8. What do GLB represent from your point of view? Expertise in French Traditional Cuisine

  9. What do GLB represent from your point of view? French Fashion

  10. What do GLB represent from your point of view? Gourmet Food

  11. What do GLB represent from your point of view? High-quality Cosmetics

  12. What do GLB represent from your point of view? Luxury brands

  13. What do GLB represent from your point of view? Up to date Designer Brands

  14. What do GLB represent from your point of view? Gourmet specialties

  15. What do GLB represent from your point of view? Professional Selection of Brands

  16. What do GLB represent from your point of view? Professional Appearance Towards Customers

  17. What do GLB represent from your point of view? Are Trendy

  18. What do GLB represent from your point of view? Are Hip

  19. What do GLB represent from your point of view? Professional Organization

  20. What do GLB represent from your point of view? Relaxing Shopping

  21. What do GLB represent from your point of view? A Great Place to Stroll

  22. What do GLB represent from your point of view? Intimate Shop Atmosphere

Factor interpretation

  • PA5 -> 1,2,15,16,19 –> Variety (Im15 has a low loading and seems to be more about professionalism than variety; Im16 and Im19 also load on PA1 and likewise relate to professionalism)

  • PA1 –> 3,4,5,16,19 –> Decoration

  • PA3 –> 20,21,22 –> Atmosphere or Ambiance

  • PA2 –> 8,10,14 –> Food or Cuisine (Im8 also loads on factor 7)

  • PA4 –> 9,11,12,13 –> Brand (Im11 has a low loading)

  • PA7 –> 6-7-8-9 –> Related to France (Im8 and Im9 have loadings on other factors)

  • PA6 –> 17 , 18 –> Fashion or mode.

fa_paf_8f <- fa(
  image,
  fm = "pa",              # principal axis factoring
  rotate = "varimax",     # varimax rotation
  nfactors = 8            # 8 factors
  )
communalities_8f <- data.frame(sort(fa_paf_8f$communality),
                               sort(fa_paf_7f$communality),
                               sort(fa_paf_6f$communality))
communalities_8f
##      sort.fa_paf_8f.communality. sort.fa_paf_7f.communality.
## Im11                   0.4457585                   0.4449125
## Im9                    0.4555544                   0.4559063
## Im5                    0.5764184                   0.4734149
## Im19                   0.6245858                   0.5297997
## Im15                   0.6464069                   0.5437903
## Im21                   0.6464327                   0.6317519
## Im13                   0.6993955                   0.6494567
## Im8                    0.7249609                   0.6999080
## Im18                   0.7554077                   0.7217525
## Im6                    0.7604978                   0.7256024
## Im16                   0.7848879                   0.7644746
## Im22                   0.7849973                   0.7824128
## Im20                   0.8085432                   0.7879817
## Im2                    0.8155382                   0.7883146
## Im12                   0.8373341                   0.8099945
## Im7                    0.8474823                   0.8302990
## Im14                   0.8534420                   0.8473835
## Im3                    0.8559235                   0.8575489
## Im10                   0.8980931                   0.8613613
## Im17                   0.9053933                   0.8910746
## Im1                    0.9324261                   0.9223516
## Im4                    0.9691025                   0.9700881
##      sort.fa_paf_6f.communality.
## Im11                   0.4483653
## Im9                    0.4631735
## Im5                    0.4655592
## Im19                   0.5230760
## Im15                   0.5464495
## Im21                   0.5766027
## Im13                   0.6320150
## Im8                    0.6485197
## Im18                   0.6995376
## Im6                    0.7003475
## Im16                   0.7061967
## Im22                   0.7291158
## Im20                   0.7427949
## Im2                    0.7587075
## Im12                   0.7621640
## Im7                    0.7762828
## Im14                   0.7792579
## Im3                    0.7839807
## Im10                   0.7907061
## Im17                   0.8373896
## Im1                    0.8552267
## Im4                    0.9171190
print(fa_paf_8f$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      PA1    PA3    PA4    PA7    PA2    PA5    PA6    PA8   
## Im3   0.812                                                 
## Im4   0.892                                                 
## Im5   0.648                                                 
## Im20         0.858                                          
## Im21         0.733                                          
## Im22         0.782                                          
## Im11                0.579                                   
## Im12                0.860                                   
## Im13                0.720                                   
## Im6                        0.828                            
## Im7                        0.843  0.302                     
## Im8                        0.538  0.587                     
## Im10                              0.867                     
## Im14                              0.832                     
## Im1                                      0.876              
## Im2                                      0.823              
## Im17                                            0.808       
## Im18                                            0.755       
## Im16                                                   0.758
## Im19  0.302                                            0.548
## Im9                 0.334  0.460                            
## Im15                                     0.466         0.392
## 
##                  PA1   PA3   PA4   PA7   PA2   PA5   PA6   PA8
## SS loadings    2.477 2.345 2.262 2.260 2.162 2.107 1.661 1.355
## Proportion Var 0.113 0.107 0.103 0.103 0.098 0.096 0.075 0.062
## Cumulative Var 0.113 0.219 0.322 0.425 0.523 0.619 0.694 0.756

Based on our initial analysis, an 8-factor solution appears to fit the data reasonably well. To refine the results further, we re-run the analysis after removing items Im8, Im9 and Im15 (columns 8, 9 and 15), since each loads substantially on two factors.

fa_paf_8f_n <- fa(
  image[,-c(8,9,15)],
  fm = "pa",              # principal axis factoring
  rotate = "varimax",     # varimax rotation
  nfactors = 8            # 8 factors
  )
## maximum iteration exceeded
## Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, :
## The estimated weights for the factor scores are probably incorrect.  Try a
## different factor score estimation method.
## Warning in fac(r = r, nfactors = nfactors, n.obs = n.obs, rotate = rotate, : An
## ultra-Heywood case was detected.  Examine the results carefully
communalities_8f_n <- data.frame(sort(fa_paf_8f_n$communality))
                            
communalities_8f_n
##      sort.fa_paf_8f_n.communality.
## Im11                     0.4329273
## Im5                      0.5793745
## Im21                     0.6382830
## Im16                     0.6436939
## Im13                     0.6968226
## Im18                     0.7078105
## Im19                     0.7275401
## Im7                      0.7395077
## Im14                     0.7737207
## Im2                      0.7782390
## Im22                     0.7854382
## Im20                     0.8153635
## Im3                      0.8550184
## Im12                     0.8670082
## Im6                      0.9011484
## Im17                     0.9643328
## Im4                      0.9682718
## Im1                      0.9792472
## Im10                     1.0096155
print(fa_paf_8f_n$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      PA1   PA3   PA4   PA5   PA2   PA7   PA6   PA8  
## Im3  0.813                                          
## Im4  0.892                                          
## Im5  0.653                                          
## Im20       0.864                                    
## Im21       0.728                                    
## Im22       0.789                                    
## Im11             0.566                              
## Im12             0.881                              
## Im13             0.720                              
## Im1                    0.896                        
## Im2                    0.788                        
## Im10                         0.936                  
## Im14                         0.762                  
## Im6                                0.911            
## Im7                          0.318 0.761            
## Im17                                     0.855      
## Im18                                     0.723      
## Im16                                           0.651
## Im19                                           0.664
## 
##                  PA1   PA3   PA4   PA5   PA2   PA7   PA6   PA8
## SS loadings    2.427 2.296 2.083 1.791 1.778 1.682 1.613 1.193
## Proportion Var 0.128 0.121 0.110 0.094 0.094 0.089 0.085 0.063
## Cumulative Var 0.128 0.249 0.358 0.453 0.546 0.635 0.720 0.782
fa.diagram(fa_paf_8f_n)

Factor interpretation

  • PA1 -> Im3, Im4, Im5 -> Decoration

  • PA3 -> Im20, Im21, Im22 -> Atmosphere or Ambiance

  • PA4 -> Im11, Im12, Im13 -> Brand

  • PA5 -> Im1, Im2 -> Variety

  • PA2 -> Im10, Im14 -> Food or Cuisine

  • PA7 -> Im6, Im7 -> Related to France

  • PA6 -> Im17, Im18 -> Fashion or Mode

  • PA8 -> Im16, Im19 -> Professionalism

PCA

# run unrotated PCA to obtain the eigenvalues
fa_pca <- principal(
  image, 
  rotate="none", 
  scores=TRUE)

# data frame with eigenvalues
pca <- data.frame(
  Factor_n =  as.factor(1:length(fa_pca$values)), 
  Eigenvalue_PCA = fa_pca$values,
  Eigenvalue_PAF = fa_0$e.values
  )
pca
##    Factor_n Eigenvalue_PCA Eigenvalue_PAF
## 1         1     8.97758636     8.97758636
## 2         2     2.46726381     2.46726381
## 3         3     1.56195916     1.56195916
## 4         4     1.45683885     1.45683885
## 5         5     1.24785174     1.24785174
## 6         6     1.14733750     1.14733750
## 7         7     0.81009930     0.81009930
## 8         8     0.71161301     0.71161301
## 9         9     0.56785521     0.56785521
## 10       10     0.45684420     0.45684420
## 11       11     0.36139965     0.36139965
## 12       12     0.33234747     0.33234747
## 13       13     0.29499718     0.29499718
## 14       14     0.28351700     0.28351700
## 15       15     0.24936387     0.24936387
## 16       16     0.22811058     0.22811058
## 17       17     0.20225224     0.20225224
## 18       18     0.18624143     0.18624143
## 19       19     0.15737216     0.15737216
## 20       20     0.11623773     0.11623773
## 21       21     0.10167221     0.10167221
## 22       22     0.08123935     0.08123935

The eigenvalues from PAF and PCA are identical here by construction: `fa()` stores the eigenvalues of the original correlation matrix in `e.values`, and that is the same matrix that `principal()` decomposes.
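A common complement to eyeballing the eigenvalue table is parallel analysis, which compares the observed eigenvalues against those from random data of the same dimensions; a hedged sketch using psych (assuming `image` holds the 22 image items):

```r
# Scree plot with parallel analysis, for both PCA components and PA factors
fa.parallel(image, fm = "pa", fa = "both")
```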

fa_pca_7f <- principal(
  nfactors = 7,
  image, 
  rotate="varimax", 
  scores=TRUE           # If TRUE, find component scores
  )

pca_communalities_7f <- data.frame(fa_pca_7f$communality,
                                   fa_paf_7f$communality)
pca_communalities_7f
##      fa_pca_7f.communality fa_paf_7f.communality
## Im1              0.8940147             0.8575489
## Im2              0.8866154             0.7824128
## Im3              0.8813179             0.8613613
## Im4              0.9113509             0.9700881
## Im5              0.7762818             0.5437903
## Im6              0.8218616             0.7644746
## Im7              0.8441562             0.8473835
## Im8              0.8073195             0.7217525
## Im9              0.6249270             0.4559063
## Im10             0.8388964             0.8910746
## Im11             0.6909827             0.4449125
## Im12             0.8000487             0.8302990
## Im13             0.7632203             0.6999080
## Im14             0.8201246             0.8099945
## Im15             0.7087339             0.6317519
## Im16             0.7257516             0.4734149
## Im17             0.8553069             0.9223516
## Im18             0.8160203             0.7256024
## Im19             0.7158130             0.5297997
## Im20             0.8552184             0.7883146
## Im21             0.7951449             0.6494567
## Im22             0.8358300             0.7879817

As expected, PCA yields higher communality estimates than PAF.
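This follows from the diagonal each method analyzes: PCA keeps 1s on the diagonal of the correlation matrix, so unique variance is absorbed into the components, whereas PAF replaces the diagonal with iteratively estimated communalities. The gap can be summarized from the comparison table above:

```r
# Average communality per method; the PCA column should be the larger one
colMeans(pca_communalities_7f)
```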

print(fa_pca_7f$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      RC7    RC1    RC5    RC3    RC4    RC2    RC6   
## Im8   0.716                              0.497       
## Im10  0.830                                          
## Im14  0.808                                          
## Im1          0.874                                   
## Im2          0.890                                   
## Im15         0.632                              0.301
## Im3                 0.830                            
## Im4                 0.853                            
## Im5                 0.813                            
## Im20                       0.886                     
## Im21                       0.835                     
## Im22                       0.811                     
## Im11                              0.776              
## Im12                              0.822              
## Im13                              0.751              
## Im6                                      0.849       
## Im7   0.375                              0.814       
## Im9                               0.371  0.600       
## Im17                                            0.780
## Im18                                            0.793
## Im19  0.438  0.399                              0.502
## Im16  0.483  0.446                              0.453
## 
##                  RC7   RC1   RC5   RC3   RC4   RC2   RC6
## SS loadings    2.723 2.702 2.642 2.550 2.464 2.400 2.189
## Proportion Var 0.124 0.123 0.120 0.116 0.112 0.109 0.099
## Cumulative Var 0.124 0.247 0.367 0.483 0.595 0.704 0.803

The results mirror those from PAF. With a 7-factor model, some items still load on multiple factors (Im8, Im16, and Im19), so we try 8 factors:
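Cross-loading items can also be flagged programmatically rather than by eye; a minimal sketch on the loadings object above, counting loadings whose absolute value exceeds the 0.3 cutoff used in the printout:

```r
# Items with more than one loading above |0.3| cross-load across components
L <- unclass(fa_pca_7f$loadings)   # strip the "loadings" class to get a plain matrix
rowSums(abs(L) > 0.3)
```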

fa_pca_8f <- principal(
  nfactors = 8,
  image, 
  rotate="varimax", 
  scores=TRUE           # If TRUE, find component scores
  )

pca_communalities_8f <- data.frame(fa_pca_8f$communality,
                                   fa_paf_8f$communality)
pca_communalities_8f
##      fa_pca_8f.communality fa_paf_8f.communality
## Im1              0.9238112             0.9324261
## Im2              0.9186596             0.8155382
## Im3              0.8851760             0.8559235
## Im4              0.9149950             0.9691025
## Im5              0.7946811             0.5764184
## Im6              0.8443973             0.7604978
## Im7              0.8561903             0.8474823
## Im8              0.8077646             0.7249609
## Im9              0.6580349             0.4555544
## Im10             0.9071324             0.8980931
## Im11             0.6910115             0.4457585
## Im12             0.8200642             0.8373341
## Im13             0.7659948             0.6993955
## Im14             0.9112282             0.8534420
## Im15             0.7363157             0.6464069
## Im16             0.8472172             0.7848879
## Im17             0.9050615             0.9053933
## Im18             0.9125390             0.7554077
## Im19             0.7846067             0.6245858
## Im20             0.8618400             0.8085432
## Im21             0.7976466             0.6464327
## Im22             0.8361820             0.7849973
print(fa_pca_8f$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      RC1    RC3    RC4    RC2    RC8    RC5    RC7    RC6   
## Im3   0.822                                                 
## Im4   0.845                                                 
## Im5   0.814                                                 
## Im20         0.887                                          
## Im21         0.835                                          
## Im22         0.811                                          
## Im11                0.768                                   
## Im12                0.841                                   
## Im13                0.758                                   
## Im8                        0.674  0.515                     
## Im10                       0.883                            
## Im14                       0.884                            
## Im6                               0.870                     
## Im7                        0.366  0.825                     
## Im9                 0.388         0.635                     
## Im1                                      0.869              
## Im2                                      0.891              
## Im15                                     0.520  0.469       
## Im16                                            0.822       
## Im19                                            0.723       
## Im17                                                   0.807
## Im18                                                   0.866
## 
##                  RC1   RC3   RC4   RC2   RC8   RC5   RC7   RC6
## SS loadings    2.571 2.563 2.497 2.439 2.421 2.216 1.865 1.808
## Proportion Var 0.117 0.116 0.113 0.111 0.110 0.101 0.085 0.082
## Cumulative Var 0.117 0.233 0.347 0.458 0.568 0.669 0.753 0.835

The 8-factor solution again appears to fit the data well. To refine the results further, we rerun the analysis after removing items Im8, Im9, and Im15, since they cross-load on two components.

fa_pca_8f_n <- principal(
  nfactors = 8,
  image[,-c(8,9,15)], 
  rotate="varimax", 
  scores=TRUE           # If TRUE, find component scores
  )

pca_communalities_8f_n <- data.frame(fa_pca_8f_n$communality,
                                   fa_paf_8f_n$communality)
pca_communalities_8f_n
##      fa_pca_8f_n.communality fa_paf_8f_n.communality
## Im1                0.9347185               0.9792472
## Im2                0.9354354               0.7782390
## Im3                0.8854065               0.8550184
## Im4                0.9169606               0.9682718
## Im5                0.7988686               0.5793745
## Im6                0.9142252               0.9011484
## Im7                0.8881899               0.7395077
## Im10               0.9365857               1.0096155
## Im11               0.6879301               0.4329273
## Im12               0.8412917               0.8670082
## Im13               0.7938506               0.6968226
## Im14               0.9194940               0.7737207
## Im16               0.8522562               0.6436939
## Im17               0.9087821               0.9643328
## Im18               0.9176481               0.7078105
## Im19               0.8203932               0.7275401
## Im20               0.8636520               0.8153635
## Im21               0.7900684               0.6382830
## Im22               0.8384857               0.7854382
print(fa_pca_8f_n$loadings, cutoff=0.3, sort=TRUE)
## 
## Loadings:
##      RC1   RC3   RC4   RC5   RC2   RC8   RC6   RC7  
## Im3  0.827                                          
## Im4  0.850                                          
## Im5  0.818                                          
## Im20       0.891                                    
## Im21       0.833                                    
## Im22       0.821                                    
## Im11             0.758                              
## Im12             0.859                              
## Im13             0.781                              
## Im1                    0.866                        
## Im2                    0.892                        
## Im10                         0.892                  
## Im14                         0.869                  
## Im6                                0.917            
## Im7                          0.309 0.859            
## Im17                                     0.818      
## Im18                                     0.876      
## Im16                                           0.824
## Im19                                           0.751
## 
##                  RC1   RC3   RC4   RC5   RC2   RC8   RC6   RC7
## SS loadings    2.556 2.525 2.334 1.881 1.875 1.843 1.786 1.645
## Proportion Var 0.135 0.133 0.123 0.099 0.099 0.097 0.094 0.087
## Cumulative Var 0.135 0.267 0.390 0.489 0.588 0.685 0.779 0.865

Factor interpretation

All 22 image items share the prompt “What do GLB represent from your point of view?”; the item labels are:

  1. Large Assortment
  2. Assortment Variety
  3. Artistic Decoration of Sales Area
  4. Creative Decoration of Sales Area
  5. Appealing Arrangement of Shop Windows
  6. France
  7. French Savoir-vivre
  8. Expertise in French Traditional Cuisine
  9. French Fashion
  10. Gourmet Food
  11. High-quality Cosmetics
  12. Luxury brands
  13. Up to date Designer Brands
  14. Gourmet specialties
  15. Professional Selection of Brands
  16. Professional Appearance Towards Customers
  17. Are Trendy
  18. Are Hip
  19. Professional Organization
  20. Relaxing Shopping
  21. A Great Place to Stroll
  22. Intimate Shop Atmosphere

PCA

  • RC1 -> Im3, Im4, Im5 -> Decoration

  • RC3 -> Im20, Im21, Im22 -> Atmosphere or Ambiance

  • RC4 -> Im11, Im12, Im13 -> Brand

  • RC5 -> Im1, Im2 -> Variety

  • RC2 -> Im10, Im14 -> Food or Cuisine

  • RC8 -> Im6, Im7 -> Related to France

  • RC6 -> Im17, Im18 -> Fashion or Mode

  • RC7 -> Im16, Im19 -> Professionalism

PAF

  • PA1 -> Im3, Im4, Im5 -> Decoration

  • PA3 -> Im20, Im21, Im22 -> Atmosphere or Ambiance

  • PA4 -> Im11, Im12, Im13 -> Brand

  • PA5 -> Im1, Im2 -> Variety

  • PA2 -> Im10, Im14 -> Food or Cuisine

  • PA7 -> Im6, Im7 -> Related to France

  • PA6 -> Im17, Im18 -> Fashion or Mode

  • PA8 -> Im16, Im19 -> Professionalism

Question 1

What are the dimensions by which Galeries Lafayette is perceived? Please explain your findings and rational for your final result.
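Alongside confirming the structure, the internal consistency of each proposed dimension can be checked; a hedged sketch for the decoration dimension (Im3, Im4, Im5), assuming `myData` as loaded above and applying the same pattern per dimension:

```r
# Cronbach's alpha for the three decoration items
psych::alpha(myData[, c("Im3", "Im4", "Im5")])$total$raw_alpha
```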

Confirmatory Factor Analysis

model <- "
decoration=~ Im3+Im4+Im5
atmosphere=~ Im20+Im21+Im22
brand=~ Im11+Im12+Im13
variety=~ Im1 + Im2
cuisine=~ Im10+ Im14
france=~ Im6 + Im7
mode=~ Im17 + Im18
professionalism =~ Im16+ Im19"

fit <- cfa(model, myData, missing="ML")
summary(fit, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6.15 ended normally after 107 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        85
## 
##   Number of observations                           553
##   Number of missing patterns                        79
## 
## Model Test User Model:
##                                                       
##   Test statistic                               259.047
##   Degrees of freedom                               124
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              7474.765
##   Degrees of freedom                               171
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.982
##   Tucker-Lewis Index (TLI)                       0.975
##                                                       
##   Robust Comparative Fit Index (CFI)             0.981
##   Robust Tucker-Lewis Index (TLI)                0.974
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -12973.111
##   Loglikelihood unrestricted model (H1)     -12843.588
##                                                       
##   Akaike (AIC)                               26116.223
##   Bayesian (BIC)                             26483.028
##   Sample-size adjusted Bayesian (SABIC)      26213.200
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.044
##   90 Percent confidence interval - lower         0.037
##   90 Percent confidence interval - upper         0.052
##   P-value H_0: RMSEA <= 0.050                    0.886
##   P-value H_0: RMSEA >= 0.080                    0.000
##                                                       
##   Robust RMSEA                                   0.045
##   90 Percent confidence interval - lower         0.038
##   90 Percent confidence interval - upper         0.053
##   P-value H_0: Robust RMSEA <= 0.050             0.825
##   P-value H_0: Robust RMSEA >= 0.080             0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.029
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                      Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   decoration =~                                                           
##     Im3                 1.000                               1.236    0.937
##     Im4                 1.056    0.025   42.716    0.000    1.305    0.969
##     Im5                 0.818    0.034   23.815    0.000    1.011    0.760
##   atmosphere =~                                                           
##     Im20                1.000                               1.265    0.845
##     Im21                0.849    0.041   20.823    0.000    1.074    0.783
##     Im22                1.060    0.047   22.606    0.000    1.340    0.877
##   brand =~                                                                
##     Im11                1.000                               0.703    0.615
##     Im12                1.410    0.094   15.046    0.000    0.991    0.872
##     Im13                1.465    0.105   13.968    0.000    1.030    0.855
##   variety =~                                                              
##     Im1                 1.000                               1.305    0.980
##     Im2                 0.885    0.033   27.043    0.000    1.155    0.899
##   cuisine =~                                                              
##     Im10                1.000                               0.812    0.923
##     Im14                1.015    0.036   28.479    0.000    0.824    0.952
##   france =~                                                               
##     Im6                 1.000                               0.975    0.813
##     Im7                 1.184    0.071   16.770    0.000    1.155    0.955
##   mode =~                                                                 
##     Im17                1.000                               1.204    0.969
##     Im18                0.994    0.041   24.143    0.000    1.197    0.856
##   professionalism =~                                                      
##     Im16                1.000                               0.921    0.766
##     Im19                1.046    0.061   17.170    0.000    0.963    0.856
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   decoration ~~                                                         
##     atmosphere        0.730    0.082    8.912    0.000    0.467    0.467
##     brand             0.409    0.051    8.040    0.000    0.471    0.471
##     variety           0.711    0.079    9.032    0.000    0.441    0.441
##     cuisine           0.418    0.050    8.393    0.000    0.416    0.416
##     france            0.402    0.063    6.350    0.000    0.334    0.334
##     mode              0.770    0.076   10.140    0.000    0.517    0.517
##     professionalsm    0.743    0.071   10.465    0.000    0.653    0.653
##   atmosphere ~~                                                         
##     brand             0.372    0.053    7.011    0.000    0.418    0.418
##     variety           0.739    0.085    8.728    0.000    0.448    0.448
##     cuisine           0.303    0.051    5.948    0.000    0.295    0.295
##     france            0.410    0.065    6.352    0.000    0.333    0.333
##     mode              0.787    0.081    9.715    0.000    0.516    0.516
##     professionalsm    0.557    0.069    8.089    0.000    0.478    0.478
##   brand ~~                                                              
##     variety           0.439    0.054    8.161    0.000    0.478    0.478
##     cuisine           0.258    0.034    7.662    0.000    0.452    0.452
##     france            0.210    0.037    5.622    0.000    0.306    0.306
##     mode              0.479    0.053    9.046    0.000    0.566    0.566
##     professionalsm    0.343    0.043    7.946    0.000    0.529    0.529
##   variety ~~                                                            
##     cuisine           0.328    0.050    6.584    0.000    0.309    0.309
##     france            0.286    0.060    4.735    0.000    0.225    0.225
##     mode              0.817    0.079   10.362    0.000    0.519    0.519
##     professionalsm    0.717    0.072    9.956    0.000    0.597    0.597
##   cuisine ~~                                                            
##     france            0.463    0.047    9.829    0.000    0.585    0.585
##     mode              0.318    0.047    6.801    0.000    0.325    0.325
##     professionalsm    0.372    0.043    8.589    0.000    0.498    0.498
##   france ~~                                                             
##     mode              0.378    0.061    6.175    0.000    0.322    0.322
##     professionalsm    0.328    0.051    6.438    0.000    0.366    0.366
##   mode ~~                                                               
##     professionalsm    0.667    0.066   10.040    0.000    0.601    0.601
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Im3               4.995    0.056   88.560    0.000    4.995    3.786
##    .Im4               4.999    0.057   86.983    0.000    4.999    3.712
##    .Im5               5.035    0.057   87.844    0.000    5.035    3.787
##    .Im20              4.672    0.064   73.177    0.000    4.672    3.123
##    .Im21              5.139    0.058   87.970    0.000    5.139    3.751
##    .Im22              4.279    0.065   65.401    0.000    4.279    2.799
##    .Im11              5.653    0.049  115.271    0.000    5.653    4.943
##    .Im12              5.666    0.049  116.089    0.000    5.666    4.983
##    .Im13              5.448    0.052  105.615    0.000    5.448    4.524
##    .Im1               4.790    0.057   84.202    0.000    4.790    3.597
##    .Im2               4.857    0.055   88.354    0.000    4.857    3.779
##    .Im10              6.100    0.037  162.789    0.000    6.100    6.937
##    .Im14              6.138    0.037  165.861    0.000    6.138    7.093
##    .Im6               5.827    0.051  113.784    0.000    5.827    4.858
##    .Im7               5.753    0.052  110.826    0.000    5.753    4.756
##    .Im17              5.025    0.053   94.519    0.000    5.025    4.041
##    .Im18              4.595    0.060   76.447    0.000    4.595    3.287
##    .Im16              5.135    0.052   99.147    0.000    5.135    4.269
##    .Im19              5.145    0.048  106.948    0.000    5.145    4.574
##     decoration        0.000                               0.000    0.000
##     atmosphere        0.000                               0.000    0.000
##     brand             0.000                               0.000    0.000
##     variety           0.000                               0.000    0.000
##     cuisine           0.000                               0.000    0.000
##     france            0.000                               0.000    0.000
##     mode              0.000                               0.000    0.000
##     professionalsm    0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Im3               0.213    0.024    8.755    0.000    0.213    0.122
##    .Im4               0.109    0.024    4.532    0.000    0.109    0.060
##    .Im5               0.747    0.049   15.217    0.000    0.747    0.422
##    .Im20              0.638    0.061   10.451    0.000    0.638    0.285
##    .Im21              0.725    0.057   12.672    0.000    0.725    0.386
##    .Im22              0.541    0.063    8.539    0.000    0.541    0.231
##    .Im11              0.814    0.055   14.802    0.000    0.814    0.622
##    .Im12              0.310    0.040    7.845    0.000    0.310    0.240
##    .Im13              0.390    0.045    8.765    0.000    0.390    0.269
##    .Im1               0.070    0.050    1.394    0.163    0.070    0.040
##    .Im2               0.317    0.044    7.233    0.000    0.317    0.192
##    .Im10              0.114    0.019    5.961    0.000    0.114    0.148
##    .Im14              0.070    0.019    3.680    0.000    0.070    0.093
##    .Im6               0.487    0.056    8.677    0.000    0.487    0.339
##    .Im7               0.128    0.067    1.930    0.054    0.128    0.088
##    .Im17              0.095    0.045    2.112    0.035    0.095    0.062
##    .Im18              0.521    0.055    9.540    0.000    0.521    0.267
##    .Im16              0.599    0.052   11.498    0.000    0.599    0.414
##    .Im19              0.338    0.045    7.457    0.000    0.338    0.267
##     decoration        1.528    0.107   14.326    0.000    1.000    1.000
##     atmosphere        1.599    0.138   11.623    0.000    1.000    1.000
##     brand             0.494    0.067    7.361    0.000    1.000    1.000
##     variety           1.704    0.118   14.388    0.000    1.000    1.000
##     cuisine           0.659    0.049   13.328    0.000    1.000    1.000
##     france            0.952    0.095   10.058    0.000    1.000    1.000
##     mode              1.451    0.104   13.988    0.000    1.000    1.000
##     professionalsm    0.849    0.088    9.638    0.000    1.000    1.000

Results

To evaluate the model, we check the global fit measures. The first is the Chi-square test: a low Chi-square value relative to its degrees of freedom indicates a good fit, and the Chi-square/df ratio should be below 5 for samples up to 1000 (slide 68 of our course). With 553 observations, the ratio is 259.047/124 = 2.089, well below 5, so the Chi-square criterion is met.

Moving on to the second fit measure: according to slide 69 of our course, the Root Mean Square Error of Approximation (RMSEA) should be below 0.05 for a good model. Here the RMSEA is 0.044 and the robust RMSEA is 0.045, so this criterion is also met.

Finally, we check the Comparative Fit Index (CFI), the last fit measure (slide 70 of our course). A CFI above 0.95 indicates a good model; ours is 0.982, above the required threshold.

Based on these fit measures, we conclude that the eight-factor measurement model fits the data well.
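The fit statistics quoted above can be pulled directly from the fitted lavaan object instead of read off the summary; a minimal sketch:

```r
# Global fit measures discussed in the text
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))

# Chi-square/df ratio (should stay below 5 for samples up to 1000)
fitMeasures(fit, "chisq") / fitMeasures(fit, "df")
```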

parameterestimates(fit, boot.ci.type = "bca.simple", standardized = TRUE)%>% kable()
lhs op rhs est se z pvalue ci.lower ci.upper std.lv std.all std.nox
decoration =~ Im3 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 1.2359656 0.9367725 0.9367725
decoration =~ Im4 1.0562183 0.0247262 42.716500 0.0000000 1.0077558 1.1046808 1.3054495 0.9694446 0.9694446
decoration =~ Im5 0.8178299 0.0343414 23.814727 0.0000000 0.7505221 0.8851377 1.0108096 0.7601410 0.7601410
atmosphere =~ Im20 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 1.2646012 0.8453892 0.8453892
atmosphere =~ Im21 0.8489179 0.0407673 20.823486 0.0000000 0.7690154 0.9288204 1.0735425 0.7834328 0.7834328
atmosphere =~ Im22 1.0598950 0.0468851 22.606229 0.0000000 0.9680019 1.1517881 1.3403445 0.8767075 0.8767075
brand =~ Im11 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 0.7029587 0.6145742 0.6145742
brand =~ Im12 1.4102158 0.0937252 15.046288 0.0000000 1.2265179 1.5939138 0.9913235 0.8719186 0.8719186
brand =~ Im13 1.4646567 0.1048551 13.968386 0.0000000 1.2591445 1.6701690 1.0295932 0.8549780 0.8549780
variety =~ Im1 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 1.3052701 0.9800185 0.9800185
variety =~ Im2 0.8851714 0.0327322 27.042813 0.0000000 0.8210174 0.9493254 1.1553878 0.8989149 0.8989149
cuisine =~ Im10 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 0.8117871 0.9231434 0.9231434
cuisine =~ Im14 1.0152003 0.0356474 28.478940 0.0000000 0.9453327 1.0850680 0.8241265 0.9523888 0.9523888
france =~ Im6 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 0.9754586 0.8131642 0.8131642
france =~ Im7 1.1842984 0.0706189 16.770282 0.0000000 1.0458880 1.3227089 1.1552341 0.9550804 0.9550804
mode =~ Im17 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 1.2044990 0.9687196 0.9687196
mode =~ Im18 0.9938483 0.0411645 24.143323 0.0000000 0.9131673 1.0745293 1.1970892 0.8563450 0.8563450
professionalism =~ Im16 1.0000000 0.0000000 NA NA 1.0000000 1.0000000 0.9213055 0.7658145 0.7658145
professionalism =~ Im19 1.0455259 0.0608933 17.169812 0.0000000 0.9261773 1.1648745 0.9632488 0.8562871 0.8562871
Im3 ~~ Im3 0.2131716 0.0243487 8.754934 0.0000000 0.1654489 0.2608942 0.2131716 0.1224573 0.1224573
Im4 ~~ Im4 0.1091203 0.0240777 4.532002 0.0000058 0.0619288 0.1563117 0.1091203 0.0601771 0.0601771
Im5 ~~ Im5 0.7465416 0.0490606 15.216713 0.0000000 0.6503845 0.8426986 0.7465416 0.4221857 0.4221857
Im20 ~~ Im20 0.6384421 0.0610890 10.451013 0.0000000 0.5187098 0.7581744 0.6384421 0.2853171 0.2853171
Im21 ~~ Im21 0.7252447 0.0572326 12.671876 0.0000000 0.6130708 0.8374186 0.7252447 0.3862331 0.3862331
Im22 ~~ Im22 0.5408251 0.0633323 8.539482 0.0000000 0.4166961 0.6649542 0.5408251 0.2313840 0.2313840
Im11 ~~ Im11 0.8141599 0.0550047 14.801638 0.0000000 0.7063526 0.9219671 0.8141599 0.6222985 0.6222985
Im12 ~~ Im12 0.3099218 0.0395081 7.844518 0.0000000 0.2324874 0.3873562 0.3099218 0.2397580 0.2397580
Im13 ~~ Im13 0.3901163 0.0445070 8.765285 0.0000000 0.3028842 0.4773483 0.3901163 0.2690126 0.2690126
Im1 ~~ Im1 0.0701827 0.0503511 1.393865 0.1633584 -0.0285037 0.1688690 0.0701827 0.0395638 0.0395638
Im2 ~~ Im2 0.3171108 0.0438406 7.233265 0.0000000 0.2311848 0.4030369 0.3171108 0.1919520 0.1919520
Im10 ~~ Im10 0.1142980 0.0191756 5.960607 0.0000000 0.0767146 0.1518814 0.1142980 0.1478062 0.1478062
Im14 ~~ Im14 0.0696040 0.0189122 3.680371 0.0002329 0.0325367 0.1066713 0.0696040 0.0929555 0.0929555
Im6 ~~ Im6 0.4874817 0.0561797 8.677186 0.0000000 0.3773715 0.5975919 0.4874817 0.3387640 0.3387640
Im7 ~~ Im7 0.1284875 0.0665897 1.929539 0.0536640 -0.0020260 0.2590010 0.1284875 0.0878215 0.0878215
Im17 ~~ Im17 0.0952080 0.0450825 2.111862 0.0346983 0.0068480 0.1835680 0.0952080 0.0615824 0.0615824
Im18 ~~ Im18 0.5211167 0.0546249 9.539915 0.0000000 0.4140539 0.6281795 0.5211167 0.2666733 0.2666733
Im16 ~~ Im16 0.5985014 0.0520508 11.498406 0.0000000 0.4964837 0.7005191 0.5985014 0.4135281 0.4135281
Im19 ~~ Im19 0.3375819 0.0452691 7.457218 0.0000000 0.2488560 0.4263078 0.3375819 0.2667725 0.2667725
decoration ~~ decoration 1.5276110 0.1066312 14.326121 0.0000000 1.3186177 1.7366042 1.0000000 1.0000000 1.0000000
atmosphere ~~ atmosphere 1.5992161 0.1375961 11.622538 0.0000000 1.3295327 1.8688995 1.0000000 1.0000000 1.0000000
brand ~~ brand 0.4941509 0.0671291 7.361201 0.0000000 0.3625803 0.6257216 1.0000000 1.0000000 1.0000000
variety ~~ variety 1.7037299 0.1184131 14.388014 0.0000000 1.4716444 1.9358154 1.0000000 1.0000000 1.0000000
cuisine ~~ cuisine 0.6589982 0.0494455 13.327778 0.0000000 0.5620869 0.7559096 1.0000000 1.0000000 1.0000000
france ~~ france 0.9515194 0.0946007 10.058276 0.0000000 0.7661056 1.1369333 1.0000000 1.0000000 1.0000000
mode ~~ mode 1.4508177 0.1037220 13.987562 0.0000000 1.2475264 1.6541091 1.0000000 1.0000000 1.0000000
professionalism ~~ professionalism 0.8488038 0.0880708 9.637738 0.0000000 0.6761881 1.0214194 1.0000000 1.0000000 1.0000000
decoration ~~ atmosphere 0.7297103 0.0818788 8.912084 0.0000000 0.5692309 0.8901897 0.4668641 0.4668641 0.4668641
decoration ~~ brand 0.4092289 0.0508965 8.040419 0.0000000 0.3094737 0.5089842 0.4710100 0.4710100 0.4710100
decoration ~~ variety 0.7107783 0.0786918 9.032435 0.0000000 0.5565452 0.8650113 0.4405826 0.4405826 0.4405826
decoration ~~ cuisine 0.4178485 0.0497850 8.393057 0.0000000 0.3202716 0.5154253 0.4164571 0.4164571 0.4164571
decoration ~~ france 0.4023560 0.0633605 6.350269 0.0000000 0.2781718 0.5265403 0.3337300 0.3337300 0.3337300
decoration ~~ mode 0.7699775 0.0759369 10.139703 0.0000000 0.6211439 0.9188110 0.5172080 0.5172080 0.5172080
decoration ~~ professionalism 0.7434486 0.0710411 10.465043 0.0000000 0.6042106 0.8826867 0.6528914 0.6528914 0.6528914
atmosphere ~~ brand 0.3718641 0.0530375 7.011348 0.0000000 0.2679126 0.4758156 0.4183126 0.4183126 0.4183126
atmosphere ~~ variety 0.7394490 0.0847246 8.727682 0.0000000 0.5733919 0.9055061 0.4479755 0.4479755 0.4479755
atmosphere ~~ cuisine 0.3026751 0.0508836 5.948386 0.0000000 0.2029451 0.4024051 0.2948363 0.2948363 0.2948363
atmosphere ~~ france 0.4102130 0.0645828 6.351736 0.0000000 0.2836330 0.5367930 0.3325424 0.3325424 0.3325424
atmosphere ~~ mode 0.7866627 0.0809721 9.715233 0.0000000 0.6279603 0.9453650 0.5164503 0.5164503 0.5164503
atmosphere ~~ professionalism 0.5567716 0.0688297 8.089122 0.0000000 0.4218679 0.6916753 0.4778811 0.4778811 0.4778811
brand ~~ variety 0.4385490 0.0537367 8.161069 0.0000000 0.3332270 0.5438710 0.4779560 0.4779560 0.4779560
brand ~~ cuisine 0.2581081 0.0336849 7.662427 0.0000000 0.1920869 0.3241292 0.4523032 0.4523032 0.4523032
brand ~~ france 0.2095183 0.0372700 5.621629 0.0000000 0.1364704 0.2825662 0.3055507 0.3055507 0.3055507
brand ~~ mode 0.4794807 0.0530033 9.046244 0.0000000 0.3755962 0.5833653 0.5662848 0.5662848 0.5662848
brand ~~ professionalism 0.3426340 0.0431207 7.945934 0.0000000 0.2581190 0.4271489 0.5290503 0.5290503 0.5290503
variety ~~ cuisine 0.3278522 0.0497940 6.584167 0.0000000 0.2302577 0.4254467 0.3094109 0.3094109 0.3094109
variety ~~ france 0.2864239 0.0604918 4.734924 0.0000022 0.1678622 0.4049856 0.2249573 0.2249573 0.2249573
variety ~~ mode 0.8166276 0.0788081 10.362234 0.0000000 0.6621666 0.9710885 0.5194183 0.5194183 0.5194183
variety ~~ professionalism 0.7174390 0.0720626 9.955777 0.0000000 0.5761989 0.8586790 0.5965968 0.5965968 0.5965968
cuisine ~~ france 0.4634995 0.0471579 9.828681 0.0000000 0.3710718 0.5559272 0.5853266 0.5853266 0.5853266
cuisine ~~ mode 0.3180388 0.0467609 6.801386 0.0000000 0.2263892 0.4096885 0.3252607 0.3252607 0.3252607
cuisine ~~ professionalism 0.3722167 0.0433381 8.588667 0.0000000 0.2872756 0.4571579 0.4976799 0.4976799 0.4976799
france ~~ mode 0.3781934 0.0612427 6.175317 0.0000000 0.2581598 0.4982270 0.3218835 0.3218835 0.3218835
france ~~ professionalism 0.3284956 0.0510209 6.438446 0.0000000 0.2284964 0.4284948 0.3655250 0.3655250 0.3655250
mode ~~ professionalism 0.6665756 0.0663933 10.039803 0.0000000 0.5364472 0.7967041 0.6006747 0.6006747 0.6006747
Im3 ~1 4.9951357 0.0564037 88.560425 0.0000000 4.8845865 5.1056850 4.9951357 3.7859514 3.7859514
Im4 ~1 4.9985357 0.0574659 86.982685 0.0000000 4.8859047 5.1111668 4.9985357 3.7119809 3.7119809
Im5 ~1 5.0354361 0.0573222 87.844461 0.0000000 4.9230867 5.1477855 5.0354361 3.7867084 3.7867084
Im20 ~1 4.6721297 0.0638470 73.176928 0.0000000 4.5469918 4.7972675 4.6721297 3.1233310 3.1233310
Im21 ~1 5.1393791 0.0584218 87.970159 0.0000000 5.0248744 5.2538838 5.1393791 3.7505341 3.7505341
Im22 ~1 4.2788656 0.0654246 65.401448 0.0000000 4.1506357 4.4070955 4.2788656 2.7987681 2.7987681
Im11 ~1 5.6533330 0.0490440 115.270730 0.0000000 5.5572086 5.7494574 5.6533330 4.9425276 4.9425276
Im12 ~1 5.6655222 0.0488034 116.088770 0.0000000 5.5698693 5.7611750 5.6655222 4.9831101 4.9831101
Im13 ~1 5.4481356 0.0515847 105.615358 0.0000000 5.3470314 5.5492397 5.4481356 4.5241521 4.5241521
Im1 ~1 4.7904479 0.0568921 84.202388 0.0000000 4.6789415 4.9019544 4.7904479 3.5967481 3.5967481
Im2 ~1 4.8567638 0.0549691 88.354396 0.0000000 4.7490262 4.9645013 4.8567638 3.7786598 3.7786598
Im10 ~1 6.0998164 0.0374707 162.788945 0.0000000 6.0263752 6.1732576 6.0998164 6.9365548 6.9365548
Im14 ~1 6.1379973 0.0370068 165.861320 0.0000000 6.0654653 6.2105293 6.1379973 7.0932802 7.0932802
Im6 ~1 5.8271294 0.0512121 113.784210 0.0000000 5.7267555 5.9275033 5.8271294 4.8576261 4.8576261
Im7 ~1 5.7528798 0.0519089 110.826392 0.0000000 5.6511401 5.8546194 5.7528798 4.7561466 4.7561466
Im17 ~1 5.0245978 0.0531596 94.519090 0.0000000 4.9204069 5.1287887 5.0245978 4.0410381 4.0410381
Im18 ~1 4.5945802 0.0601017 76.446719 0.0000000 4.4767829 4.7123774 4.5945802 3.2867606 3.2867606
Im16 ~1 5.1353964 0.0517956 99.147426 0.0000000 5.0338790 5.2369138 5.1353964 4.2686831 4.2686831
Im19 ~1 5.1451102 0.0481086 106.947872 0.0000000 5.0508191 5.2394013 5.1451102 4.5737835 4.5737835
decoration ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
atmosphere ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
brand ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
variety ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
cuisine ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
france ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
mode ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
professionalism ~1 0.0000000 0.0000000 NA NA 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000

We can observe that nearly all estimates are significant. As discussed in class, the lower and upper confidence bounds should not include 0; the only exceptions here are the residual variances of Im1 and Im7 (their intervals just cross zero). Despite this, the results are still satisfactory.
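This check can also be done programmatically by scanning the parameter table for confidence intervals that contain zero (a sketch assuming the fitted lavaan object is named `fit`, as elsewhere in this report):

```r
# Flag estimates whose 95% CI includes zero (potentially non-significant)
PE <- parameterEstimates(fit)
PE[PE$ci.lower < 0 & PE$ci.upper > 0,
   c("lhs", "op", "rhs", "est", "ci.lower", "ci.upper")]
```

In this model, only the residual variances of Im1 and Im7 should be flagged.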

#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im3,Im4,Im5)))
CronReli
## $sample.size
## [1] 508
## 
## $number.of.items
## [1] 3
## 
## $alpha
## [1] 0.9151505
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im20,Im21,Im22)))
CronReli
## $sample.size
## [1] 525
## 
## $number.of.items
## [1] 3
## 
## $alpha
## [1] 0.8749604
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im11,Im12,Im13)))
CronReli
## $sample.size
## [1] 520
## 
## $number.of.items
## [1] 3
## 
## $alpha
## [1] 0.8127499
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im1,Im2)))
CronReli
## $sample.size
## [1] 525
## 
## $number.of.items
## [1] 2
## 
## $alpha
## [1] 0.9372013
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im10,Im14)))
CronReli
## $sample.size
## [1] 525
## 
## $number.of.items
## [1] 2
## 
## $alpha
## [1] 0.9334071
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im6,Im7)))
CronReli
## $sample.size
## [1] 520
## 
## $number.of.items
## [1] 2
## 
## $alpha
## [1] 0.8758912
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im17,Im18)))
CronReli
## $sample.size
## [1] 521
## 
## $number.of.items
## [1] 2
## 
## $alpha
## [1] 0.9039139
#calculating reliability
CronReli=cronbach(subset(myData, select = c(Im16,Im19)))
CronReli
## $sample.size
## [1] 520
## 
## $number.of.items
## [1] 2
## 
## $alpha
## [1] 0.7940545

All Cronbach’s alpha values are greater than 0.7, indicating that the observed variables have sufficient reliability.
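The repeated `cronbach()` calls above can be collapsed into a single pass over the item sets (a sketch using the same `psy::cronbach` function; the list names are our own construct labels):

```r
# Compute Cronbach's alpha for every construct in one pass
item_sets <- list(
  decoration      = c("Im3", "Im4", "Im5"),
  atmosphere      = c("Im20", "Im21", "Im22"),
  brand           = c("Im11", "Im12", "Im13"),
  variety         = c("Im1", "Im2"),
  cuisine         = c("Im10", "Im14"),
  france          = c("Im6", "Im7"),
  mode            = c("Im17", "Im18"),
  professionalism = c("Im16", "Im19")
)
sapply(item_sets, function(items) cronbach(myData[, items])$alpha)
```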

std_fit=inspect(fit, "std")
std_fit$psi
##                 decrtn atmsph brand varity cuisin france  mode prfssn
## decoration       1.000                                               
## atmosphere       0.467  1.000                                        
## brand            0.471  0.418 1.000                                  
## variety          0.441  0.448 0.478  1.000                           
## cuisine          0.416  0.295 0.452  0.309  1.000                    
## france           0.334  0.333 0.306  0.225  0.585  1.000             
## mode             0.517  0.516 0.566  0.519  0.325  0.322 1.000       
## professionalism  0.653  0.478 0.529  0.597  0.498  0.366 0.601  1.000

Some comments on the standardized covariances (correlations):

  • decoration and professionalism (0.653): a strong positive association between these two factors, meaning that customers who rate the decoration highly also tend to perceive the staff as more professional.

  • cuisine and atmosphere (0.295): a relatively weak positive association, meaning that perceptions of the cuisine are not strongly related to perceptions of the atmosphere of the establishment.


std_fit$lambda
##      decrtn atmsph brand varity cuisin france  mode prfssn
## Im3   0.937  0.000 0.000  0.000  0.000  0.000 0.000  0.000
## Im4   0.969  0.000 0.000  0.000  0.000  0.000 0.000  0.000
## Im5   0.760  0.000 0.000  0.000  0.000  0.000 0.000  0.000
## Im20  0.000  0.845 0.000  0.000  0.000  0.000 0.000  0.000
## Im21  0.000  0.783 0.000  0.000  0.000  0.000 0.000  0.000
## Im22  0.000  0.877 0.000  0.000  0.000  0.000 0.000  0.000
## Im11  0.000  0.000 0.615  0.000  0.000  0.000 0.000  0.000
## Im12  0.000  0.000 0.872  0.000  0.000  0.000 0.000  0.000
## Im13  0.000  0.000 0.855  0.000  0.000  0.000 0.000  0.000
## Im1   0.000  0.000 0.000  0.980  0.000  0.000 0.000  0.000
## Im2   0.000  0.000 0.000  0.899  0.000  0.000 0.000  0.000
## Im10  0.000  0.000 0.000  0.000  0.923  0.000 0.000  0.000
## Im14  0.000  0.000 0.000  0.000  0.952  0.000 0.000  0.000
## Im6   0.000  0.000 0.000  0.000  0.000  0.813 0.000  0.000
## Im7   0.000  0.000 0.000  0.000  0.000  0.955 0.000  0.000
## Im17  0.000  0.000 0.000  0.000  0.000  0.000 0.969  0.000
## Im18  0.000  0.000 0.000  0.000  0.000  0.000 0.856  0.000
## Im16  0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.766
## Im19  0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.856

All indicators have standardized loadings greater than 0.7 except Im11 (0.615), indicating that for the remaining indicators the underlying construct explains at least about 50% of the variance (0.7² ≈ 0.49).
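The loading matrix can be reduced to a simple per-indicator check (a sketch assuming `std_fit` was obtained via `inspect(fit, "std")` as above):

```r
# Largest standardized loading per indicator; flag those below 0.7
max_loading <- apply(std_fit$lambda, 1, max)
max_loading[max_loading < 0.7]  # in this model, only Im11 falls below the threshold
```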

std_fit$theta
##        Im3   Im4   Im5  Im20  Im21  Im22  Im11  Im12  Im13   Im1   Im2  Im10
## Im3  0.122                                                                  
## Im4  0.000 0.060                                                            
## Im5  0.000 0.000 0.422                                                      
## Im20 0.000 0.000 0.000 0.285                                                
## Im21 0.000 0.000 0.000 0.000 0.386                                          
## Im22 0.000 0.000 0.000 0.000 0.000 0.231                                    
## Im11 0.000 0.000 0.000 0.000 0.000 0.000 0.622                              
## Im12 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.240                        
## Im13 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.269                  
## Im1  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.040            
## Im2  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.192      
## Im10 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.148
## Im14 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im6  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im7  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im17 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im18 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im16 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## Im19 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
##       Im14   Im6   Im7  Im17  Im18  Im16  Im19
## Im3                                           
## Im4                                           
## Im5                                           
## Im20                                          
## Im21                                          
## Im22                                          
## Im11                                          
## Im12                                          
## Im13                                          
## Im1                                           
## Im2                                           
## Im10                                          
## Im14 0.093                                    
## Im6  0.000 0.339                              
## Im7  0.000 0.000 0.088                        
## Im17 0.000 0.000 0.000 0.062                  
## Im18 0.000 0.000 0.000 0.000 0.267            
## Im16 0.000 0.000 0.000 0.000 0.000 0.414      
## Im19 0.000 0.000 0.000 0.000 0.000 0.000 0.267

From the theta matrix we can see that the diagonal values (standardized residual variances) range from 0.040 (Im1) to 0.622 (Im11). Lower values indicate that the latent factor accounts for a larger proportion of the variance in the observed variable, while higher values mean the latent factor explains less of it.
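The complement of these residual variances is the communality, which lavaan can report directly as R-squared (a sketch):

```r
# R-squared per indicator: share of variance explained by its latent factor
inspect(fit, "rsquare")

# Equivalently, 1 minus the diagonal of the standardized theta matrix
1 - diag(std_fit$theta)
```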

modificationindices(fit) %>% filter(mi>10)
##           lhs op  rhs     mi    epc sepc.lv sepc.all sepc.nox
## 1  atmosphere =~ Im12 10.952 -0.115  -0.145   -0.127   -0.127
## 2     variety =~ Im20 14.777 -0.151  -0.197   -0.132   -0.132
## 3     variety =~ Im12 10.663 -0.111  -0.145   -0.127   -0.127
## 4     variety =~ Im13 13.970  0.133   0.174    0.144    0.144
## 5     cuisine =~ Im11 12.742  0.215   0.174    0.152    0.152
## 6        mode =~ Im12 17.245 -0.179  -0.216   -0.190   -0.190
## 7        mode =~ Im13 23.832  0.220   0.265    0.220    0.220
## 8        Im20 ~~ Im21 11.455  0.228   0.228    0.335    0.335
## 9        Im21 ~~ Im22 15.140 -0.285  -0.285   -0.455   -0.455
## 10       Im11 ~~ Im12 13.307  0.145   0.145    0.288    0.288
## 11       Im11 ~~ Im13 21.323 -0.191  -0.191   -0.338   -0.338
## 12       Im13 ~~  Im1 10.708  0.068   0.068    0.409    0.409

Since none of the modification indices (MI) is very large, there is no strong evidence of local misfit, which indicates a good model.

Question 2

Are the mechanisms driving satisfaction and affective commitment similar? Do satisfaction and affective commitment mediate the impact of image perceptions on the outcomes? If yes, for which outcomes?

To create a structural equation model, we need to specify the relationships between the observed variables and the latent variables according to the assumed structure:

Images → Mediators → Outcomes

For this model, we have:

  1. Images: The 8 dimensions found previously.

  2. Mediators: Affective Commitment (with COM_A1 – COM_A4) and Customer Satisfaction (with SAT_1 – SAT_3).

  3. Outcomes: Repurchase Intention (with C_REP1 – C_REP3) and Co-creation (with C_CR1, C_CR3, C_CR4).

Now let’s define the model:

  1. Images: These are the 8 dimensions we identified earlier, which serve as the predictor variables in our model.

  2. Mediators:

    • Affective Commitment (AC) =~ COM_A1 + COM_A2 + COM_A3 + COM_A4

    • Customer Satisfaction (CS) =~ SAT_1 + SAT_2 + SAT_3

  3. Outcomes:

    • Repurchase Intention (RI) =~ C_REP1 + C_REP2 + C_REP3

    • Co-creation (CC) =~ C_CR1 + C_CR3 + C_CR4

Model

model1 <- "
# measurement model (=~)
decoration=~ Im3+Im4+Im5
atmosphere=~ Im20+Im21+Im22
brand=~ Im11+Im12+Im13
variety=~ Im1 + Im2
cuisine=~ Im10+ Im14
france=~ Im6 + Im7
mode=~ Im17 + Im18
professionalism =~ Im16+ Im19

  satisfaction =~ SAT_1 + SAT_2 + SAT_3
  commitment =~ COM_A1 + COM_A2 + COM_A3 + COM_A4
  cocreation =~ C_CR1 + C_CR3 + C_CR4
  repurchase =~ C_REP1 + C_REP2 + C_REP3

# Structural model ( ~)
cocreation ~ a * satisfaction + b * commitment
repurchase ~ c * satisfaction + d * commitment 

satisfaction ~ e * professionalism + f * mode + g * france + h * cuisine + i * variety + j * brand + k * atmosphere + l * decoration
commitment ~ m * professionalism + n * mode + o * france + p * cuisine + q * variety + r * brand + s * atmosphere + t * decoration

cocreation ~ u * professionalism + v * mode + w * france + x * cuisine + y * variety + z * brand+ aa * atmosphere + bb * decoration
repurchase ~  cc * professionalism + dd * mode + ee * france + ff * cuisine + gg * variety + hh * brand + ii * atmosphere + jj * decoration


# indirect effect (:=)
# for cocreation: 
  ae:=a*e
  af:=a*f
  ag:=a*g
  ah:=a*h
  ai:=a*i
  aj:=a*j
  ak:=a*k
  al:=a*l
  
  bm:=b*m
  bn:=b*n
  bo:=b*o
  bp:=b*p
  bq:=b*q
  br:=b*r
  bs:=b*s
  bt:=b*t
  
# for repurchase

  ce:=c*e
  cf:=c*f
  cg:=c*g
  ch:=c*h
  ci:=c*i
  cj:=c*j
  ck:=c*k
  cl:=c*l
  
  dm:=d*m
  dn:=d*n
  do:=d*o
  dp:=d*p
  dq:=d*q
  dr:=d*r
  ds:=d*s
  dt:=d*t
  
# Total effects ( := TE)
# for cocreation
TE1C:= u + (a*e) + (b*m)
TE2C:= v + (a*f) + (b*n)
TE3C:= w + (a*g) + (b*o)
TE4C:= x + (a*h) + (b*p)
TE5C:= y + (a*i) + (b*q)
TE6C:= z + (a*j) + (b*r)
TE7C:= aa + (a*k) + (b*s)
TE8C:= bb + (a*l) + (b*t)

# for repurchase 
TE1R:= cc + (c*e) + (d*m)
TE2R:= dd + (c*f) + (d*n)
TE3R:= ee + (c*g) + (d*o)
TE4R:= ff + (c*h) + (d*p)
TE5R:= gg + (c*i) + (d*q)
TE6R:= hh + (c*j) + (d*r)
TE7R:= ii + (c*k) + (d*s)
TE8R:= jj + (c*l) + (d*t)

# total indirect effect
# for cocreation 
TIE1C:=  (a*e) + (b*m)
TIE2C:=  (a*f) + (b*n)
TIE3C:=  (a*g) + (b*o)
TIE4C:=  (a*h) + (b*p)
TIE5C:=  (a*i) + (b*q)
TIE6C:=  (a*j) + (b*r)
TIE7C:=  (a*k) + (b*s)
TIE8C:=  (a*l) + (b*t)

# for repurchase 
TIE1R:=  (c*e) + (d*m)
TIE2R:=  (c*f) + (d*n)
TIE3R:=  (c*g) + (d*o)
TIE4R:=  (c*h) + (d*p)
TIE5R:=  (c*i) + (d*q)
TIE6R:=  (c*j) + (d*r)
TIE7R:=  (c*k) + (d*s)
TIE8R:=  (c*l) + (d*t)

"
fit1<-cfa(model1, data=myData,estimator="MLR", missing="ML")

summary(fit1, fit.measures=TRUE,standardized=TRUE)
## lavaan 0.6.15 ended normally after 149 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                       161
## 
##   Number of observations                           553
##   Number of missing patterns                       135
## 
## Model Test User Model:
##                                               Standard      Scaled
##   Test Statistic                               700.455     632.247
##   Degrees of freedom                               399         399
##   P-value (Chi-square)                           0.000       0.000
##   Scaling correction factor                                  1.108
##     Yuan-Bentler correction (Mplus variant)                       
## 
## Model Test Baseline Model:
## 
##   Test statistic                             11978.557    9969.592
##   Degrees of freedom                               496         496
##   P-value                                        0.000       0.000
##   Scaling correction factor                                  1.202
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.974       0.975
##   Tucker-Lewis Index (TLI)                       0.967       0.969
##                                                                   
##   Robust Comparative Fit Index (CFI)                         0.979
##   Robust Tucker-Lewis Index (TLI)                            0.973
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -22368.900  -22368.900
##   Scaling correction factor                                  1.404
##       for the MLR correction                                      
##   Loglikelihood unrestricted model (H1)     -22018.673  -22018.673
##   Scaling correction factor                                  1.193
##       for the MLR correction                                      
##                                                                   
##   Akaike (AIC)                               45059.800   45059.800
##   Bayesian (BIC)                             45754.573   45754.573
##   Sample-size adjusted Bayesian (SABIC)      45243.488   45243.488
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.037       0.033
##   90 Percent confidence interval - lower         0.032       0.028
##   90 Percent confidence interval - upper         0.041       0.037
##   P-value H_0: RMSEA <= 0.050                    1.000       1.000
##   P-value H_0: RMSEA >= 0.080                    0.000       0.000
##                                                                   
##   Robust RMSEA                                               0.034
##   90 Percent confidence interval - lower                     0.029
##   90 Percent confidence interval - upper                     0.039
##   P-value H_0: Robust RMSEA <= 0.050                         1.000
##   P-value H_0: Robust RMSEA >= 0.080                         0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.041       0.041
## 
## Parameter Estimates:
## 
##   Standard errors                             Sandwich
##   Information bread                           Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                      Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   decoration =~                                                           
##     Im3                 1.000                               1.235    0.936
##     Im4                 1.057    0.028   37.421    0.000    1.306    0.970
##     Im5                 0.818    0.046   17.727    0.000    1.011    0.760
##   atmosphere =~                                                           
##     Im20                1.000                               1.262    0.844
##     Im21                0.857    0.046   18.667    0.000    1.081    0.789
##     Im22                1.056    0.048   21.798    0.000    1.333    0.873
##   brand =~                                                                
##     Im11                1.000                               0.701    0.613
##     Im12                1.414    0.113   12.568    0.000    0.991    0.872
##     Im13                1.468    0.140   10.474    0.000    1.029    0.855
##   variety =~                                                              
##     Im1                 1.000                               1.297    0.974
##     Im2                 0.896    0.035   25.253    0.000    1.162    0.904
##   cuisine =~                                                              
##     Im10                1.000                               0.810    0.921
##     Im14                1.021    0.041   24.871    0.000    0.827    0.955
##   france =~                                                               
##     Im6                 1.000                               0.987    0.822
##     Im7                 1.158    0.076   15.174    0.000    1.143    0.944
##   mode =~                                                                 
##     Im17                1.000                               1.205    0.970
##     Im18                0.992    0.042   23.744    0.000    1.196    0.855
##   professionalism =~                                                      
##     Im16                1.000                               0.919    0.764
##     Im19                1.043    0.071   14.680    0.000    0.959    0.853
##   satisfaction =~                                                         
##     SAT_1               1.000                               0.882    0.865
##     SAT_2               0.933    0.059   15.698    0.000    0.823    0.819
##     SAT_3               0.809    0.061   13.271    0.000    0.714    0.624
##   commitment =~                                                           
##     COM_A1              1.000                               1.144    0.796
##     COM_A2              1.174    0.049   23.795    0.000    1.342    0.836
##     COM_A3              1.162    0.059   19.802    0.000    1.329    0.817
##     COM_A4              1.278    0.064   20.041    0.000    1.462    0.842
##   cocreation =~                                                           
##     C_CR1               1.000                               1.658    0.851
##     C_CR3               1.033    0.056   18.597    0.000    1.712    0.826
##     C_CR4               0.963    0.056   17.089    0.000    1.597    0.806
##   repurchase =~                                                           
##     C_REP1              1.000                               0.596    0.816
##     C_REP2              0.971    0.048   20.251    0.000    0.579    0.931
##     C_REP3              0.702    0.057   12.368    0.000    0.419    0.756
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   cocreation ~                                                          
##     satisfctn  (a)   -0.357    0.143   -2.501    0.012   -0.190   -0.190
##     commitmnt  (b)    0.546    0.094    5.824    0.000    0.377    0.377
##   repurchase ~                                                          
##     satisfctn  (c)    0.215    0.049    4.396    0.000    0.318    0.318
##     commitmnt  (d)    0.184    0.031    5.882    0.000    0.354    0.354
##   satisfaction ~                                                        
##     prfssnlsm  (e)    0.459    0.105    4.382    0.000    0.479    0.479
##     mode       (f)    0.008    0.061    0.131    0.896    0.011    0.011
##     france     (g)    0.103    0.053    1.934    0.053    0.115    0.115
##     cuisine    (h)    0.081    0.075    1.069    0.285    0.074    0.074
##     variety    (i)    0.134    0.053    2.512    0.012    0.197    0.197
##     brand      (j)   -0.038    0.095   -0.400    0.689   -0.030   -0.030
##     atmospher  (k)    0.052    0.044    1.169    0.243    0.074    0.074
##     decoratin  (l)   -0.109    0.048   -2.285    0.022   -0.152   -0.152
##   commitment ~                                                          
##     prfssnlsm  (m)    0.160    0.129    1.240    0.215    0.129    0.129
##     mode       (n)   -0.018    0.068   -0.260    0.795   -0.019   -0.019
##     france     (o)    0.223    0.067    3.327    0.001    0.192    0.192
##     cuisine    (p)    0.028    0.090    0.308    0.758    0.020    0.020
##     variety    (q)    0.101    0.055    1.840    0.066    0.114    0.114
##     brand      (r)   -0.187    0.116   -1.614    0.106   -0.115   -0.115
##     atmospher  (s)    0.373    0.059    6.359    0.000    0.411    0.411
##     decoratin  (t)   -0.024    0.058   -0.413    0.680   -0.026   -0.026
##   cocreation ~                                                          
##     prfssnlsm  (u)   -0.176    0.194   -0.908    0.364   -0.098   -0.098
##     mode       (v)    0.022    0.091    0.245    0.807    0.016    0.016
##     france     (w)   -0.127    0.110   -1.152    0.249   -0.075   -0.075
##     cuisine    (x)   -0.080    0.142   -0.560    0.575   -0.039   -0.039
##     variety    (y)   -0.006    0.083   -0.074    0.941   -0.005   -0.005
##     brand      (z)    0.197    0.149    1.318    0.187    0.083    0.083
##     atmospher (aa)    0.152    0.093    1.633    0.103    0.116    0.116
##     decoratin (bb)   -0.031    0.100   -0.311    0.756   -0.023   -0.023
##   repurchase ~                                                          
##     prfssnlsm (cc)   -0.037    0.056   -0.655    0.513   -0.056   -0.056
##     mode      (dd)   -0.011    0.028   -0.385    0.701   -0.022   -0.022
##     france    (ee)   -0.034    0.031   -1.073    0.283   -0.056   -0.056
##     cuisine   (ff)    0.038    0.047    0.806    0.420    0.051    0.051
##     variety   (gg)   -0.017    0.024   -0.688    0.491   -0.037   -0.037
##     brand     (hh)    0.077    0.052    1.490    0.136    0.091    0.091
##     atmospher (ii)    0.040    0.029    1.375    0.169    0.085    0.085
##     decoratin (jj)    0.010    0.027    0.358    0.720    0.020    0.020
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   decoration ~~                                                         
##     atmosphere        0.728    0.086    8.422    0.000    0.467    0.467
##     brand             0.407    0.054    7.531    0.000    0.470    0.470
##     variety           0.708    0.076    9.364    0.000    0.442    0.442
##     cuisine           0.417    0.053    7.922    0.000    0.417    0.417
##     france            0.413    0.074    5.553    0.000    0.339    0.339
##     mode              0.769    0.080    9.568    0.000    0.516    0.516
##     professionalsm    0.744    0.078    9.528    0.000    0.655    0.655
##   atmosphere ~~                                                         
##     brand             0.370    0.064    5.772    0.000    0.418    0.418
##     variety           0.732    0.084    8.665    0.000    0.447    0.447
##     cuisine           0.301    0.053    5.668    0.000    0.295    0.295
##     france            0.415    0.074    5.579    0.000    0.333    0.333
##     mode              0.785    0.083    9.464    0.000    0.516    0.516
##     professionalsm    0.552    0.073    7.616    0.000    0.476    0.476
##   brand ~~                                                              
##     variety           0.433    0.060    7.267    0.000    0.477    0.477
##     cuisine           0.256    0.040    6.399    0.000    0.452    0.452
##     france            0.211    0.039    5.423    0.000    0.305    0.305
##     mode              0.477    0.064    7.438    0.000    0.565    0.565
##     professionalsm    0.342    0.044    7.744    0.000    0.531    0.531
##   variety ~~                                                            
##     cuisine           0.327    0.050    6.583    0.000    0.312    0.312
##     france            0.292    0.063    4.632    0.000    0.228    0.228
##     mode              0.814    0.088    9.249    0.000    0.521    0.521
##     professionalsm    0.717    0.075    9.576    0.000    0.602    0.602
##   cuisine ~~                                                            
##     france            0.469    0.050    9.313    0.000    0.587    0.587
##     mode              0.317    0.042    7.576    0.000    0.325    0.325
##     professionalsm    0.371    0.047    7.848    0.000    0.499    0.499
##   france ~~                                                             
##     mode              0.389    0.068    5.721    0.000    0.327    0.327
##     professionalsm    0.336    0.051    6.602    0.000    0.370    0.370
##   mode ~~                                                               
##     professionalsm    0.667    0.068    9.845    0.000    0.602    0.602
##  .cocreation ~~                                                         
##    .repurchase       -0.015    0.034   -0.434    0.664   -0.020   -0.020
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Im3               4.995    0.056   88.622    0.000    4.995    3.786
##    .Im4               4.999    0.057   87.009    0.000    4.999    3.712
##    .Im5               5.036    0.057   87.765    0.000    5.036    3.787
##    .Im20              4.672    0.064   73.268    0.000    4.672    3.125
##    .Im21              5.139    0.058   88.093    0.000    5.139    3.750
##    .Im22              4.280    0.065   65.575    0.000    4.280    2.802
##    .Im11              5.653    0.049  115.355    0.000    5.653    4.944
##    .Im12              5.665    0.049  116.260    0.000    5.665    4.986
##    .Im13              5.448    0.052  105.700    0.000    5.448    4.527
##    .Im1               4.792    0.057   84.272    0.000    4.792    3.600
##    .Im2               4.858    0.055   88.351    0.000    4.858    3.781
##    .Im10              6.100    0.037  162.837    0.000    6.100    6.936
##    .Im14              6.138    0.037  165.572    0.000    6.138    7.093
##    .Im6               5.828    0.051  114.014    0.000    5.828    4.858
##    .Im7               5.754    0.052  110.958    0.000    5.754    4.756
##    .Im17              5.025    0.053   94.433    0.000    5.025    4.042
##    .Im18              4.595    0.060   76.160    0.000    4.595    3.287
##    .Im16              5.135    0.052   99.250    0.000    5.135    4.270
##    .Im19              5.145    0.048  106.953    0.000    5.145    4.576
##    .SAT_1             5.343    0.044  122.780    0.000    5.343    5.239
##    .SAT_2             5.482    0.043  127.736    0.000    5.482    5.455
##    .SAT_3             5.458    0.050  109.045    0.000    5.458    4.774
##    .COM_A1            4.287    0.062   69.635    0.000    4.287    2.983
##    .COM_A2            3.887    0.069   56.723    0.000    3.887    2.420
##    .COM_A3            3.543    0.070   50.824    0.000    3.543    2.178
##    .COM_A4            3.456    0.074   46.674    0.000    3.456    1.991
##    .C_CR1             2.679    0.083   32.267    0.000    2.679    1.375
##    .C_CR3             3.261    0.088   37.085    0.000    3.261    1.572
##    .C_CR4             2.786    0.084   33.126    0.000    2.786    1.405
##    .C_REP1            4.283    0.031  136.245    0.000    4.283    5.859
##    .C_REP2            4.507    0.027  167.452    0.000    4.507    7.250
##    .C_REP3            4.677    0.024  193.058    0.000    4.677    8.445
##     decoration        0.000                               0.000    0.000
##     atmosphere        0.000                               0.000    0.000
##     brand             0.000                               0.000    0.000
##     variety           0.000                               0.000    0.000
##     cuisine           0.000                               0.000    0.000
##     france            0.000                               0.000    0.000
##     mode              0.000                               0.000    0.000
##     professionalsm    0.000                               0.000    0.000
##    .satisfaction      0.000                               0.000    0.000
##    .commitment        0.000                               0.000    0.000
##    .cocreation        0.000                               0.000    0.000
##    .repurchase        0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Im3               0.214    0.042    5.082    0.000    0.214    0.123
##    .Im4               0.108    0.031    3.507    0.000    0.108    0.060
##    .Im5               0.747    0.066   11.357    0.000    0.747    0.422
##    .Im20              0.644    0.076    8.530    0.000    0.644    0.288
##    .Im21              0.708    0.093    7.634    0.000    0.708    0.377
##    .Im22              0.557    0.076    7.288    0.000    0.557    0.239
##    .Im11              0.817    0.091    8.950    0.000    0.817    0.625
##    .Im12              0.309    0.055    5.592    0.000    0.309    0.240
##    .Im13              0.389    0.055    7.021    0.000    0.389    0.269
##    .Im1               0.090    0.052    1.716    0.086    0.090    0.051
##    .Im2               0.302    0.051    5.914    0.000    0.302    0.183
##    .Im10              0.118    0.029    4.073    0.000    0.118    0.153
##    .Im14              0.066    0.022    3.010    0.003    0.066    0.088
##    .Im6               0.466    0.067    6.998    0.000    0.466    0.324
##    .Im7               0.158    0.072    2.198    0.028    0.158    0.108
##    .Im17              0.092    0.046    1.990    0.047    0.092    0.060
##    .Im18              0.524    0.088    5.965    0.000    0.524    0.268
##    .Im16              0.602    0.071    8.415    0.000    0.602    0.416
##    .Im19              0.345    0.052    6.650    0.000    0.345    0.273
##    .SAT_1             0.262    0.038    6.976    0.000    0.262    0.252
##    .SAT_2             0.332    0.061    5.485    0.000    0.332    0.329
##    .SAT_3             0.798    0.165    4.832    0.000    0.798    0.610
##    .COM_A1            0.757    0.074   10.270    0.000    0.757    0.366
##    .COM_A2            0.779    0.084    9.326    0.000    0.779    0.302
##    .COM_A3            0.880    0.079   11.170    0.000    0.880    0.333
##    .COM_A4            0.875    0.080   10.987    0.000    0.875    0.290
##    .C_CR1             1.047    0.144    7.285    0.000    1.047    0.276
##    .C_CR3             1.369    0.192    7.125    0.000    1.369    0.318
##    .C_CR4             1.378    0.204    6.766    0.000    1.378    0.351
##    .C_REP1            0.179    0.027    6.733    0.000    0.179    0.334
##    .C_REP2            0.051    0.012    4.160    0.000    0.051    0.133
##    .C_REP3            0.131    0.012   10.620    0.000    0.131    0.428
##     decoration        1.526    0.105   14.501    0.000    1.000    1.000
##     atmosphere        1.591    0.138   11.533    0.000    1.000    1.000
##     brand             0.491    0.088    5.558    0.000    1.000    1.000
##     variety           1.682    0.114   14.718    0.000    1.000    1.000
##     cuisine           0.655    0.066    9.882    0.000    1.000    1.000
##     france            0.974    0.111    8.760    0.000    1.000    1.000
##     mode              1.453    0.116   12.551    0.000    1.000    1.000
##     professionalsm    0.845    0.101    8.367    0.000    1.000    1.000
##    .satisfaction      0.449    0.063    7.166    0.000    0.576    0.576
##    .commitment        0.862    0.088    9.757    0.000    0.659    0.659
##    .cocreation        2.280    0.220   10.373    0.000    0.829    0.829
##    .repurchase        0.237    0.025    9.638    0.000    0.667    0.667
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     ae               -0.164    0.078   -2.098    0.036   -0.091   -0.091
##     af               -0.003    0.022   -0.131    0.896   -0.002   -0.002
##     ag               -0.037    0.024   -1.532    0.126   -0.022   -0.022
##     ah               -0.029    0.031   -0.943    0.346   -0.014   -0.014
##     ai               -0.048    0.023   -2.066    0.039   -0.038   -0.038
##     aj                0.014    0.034    0.406    0.685    0.006    0.006
##     ak               -0.019    0.018   -1.009    0.313   -0.014   -0.014
##     al                0.039    0.025    1.564    0.118    0.029    0.029
##     bm                0.087    0.073    1.204    0.229    0.048    0.048
##     bn               -0.010    0.037   -0.260    0.795   -0.007   -0.007
##     bo                0.122    0.041    2.964    0.003    0.072    0.072
##     bp                0.015    0.049    0.307    0.759    0.007    0.007
##     bq                0.055    0.031    1.787    0.074    0.043    0.043
##     br               -0.102    0.066   -1.561    0.118   -0.043   -0.043
##     bs                0.204    0.046    4.425    0.000    0.155    0.155
##     bt               -0.013    0.032   -0.410    0.682   -0.010   -0.010
##     ce                0.099    0.033    2.945    0.003    0.152    0.152
##     cf                0.002    0.013    0.130    0.896    0.003    0.003
##     cg                0.022    0.012    1.860    0.063    0.037    0.037
##     ch                0.017    0.017    1.003    0.316    0.023    0.023
##     ci                0.029    0.013    2.212    0.027    0.063    0.063
##     cj               -0.008    0.021   -0.394    0.694   -0.010   -0.010
##     ck                0.011    0.010    1.134    0.257    0.024    0.024
##     cl               -0.023    0.012   -1.972    0.049   -0.048   -0.048
##     dm                0.030    0.024    1.217    0.223    0.045    0.045
##     dn               -0.003    0.012   -0.260    0.795   -0.007   -0.007
##     do                0.041    0.014    2.932    0.003    0.068    0.068
##     dp                0.005    0.017    0.308    0.758    0.007    0.007
##     dq                0.019    0.010    1.777    0.076    0.040    0.040
##     dr               -0.035    0.022   -1.564    0.118   -0.041   -0.041
##     ds                0.069    0.015    4.475    0.000    0.145    0.145
##     dt               -0.004    0.011   -0.413    0.679   -0.009   -0.009
##     TE1C             -0.253    0.181   -1.395    0.163   -0.140   -0.140
##     TE2C              0.010    0.096    0.101    0.920    0.007    0.007
##     TE3C             -0.042    0.111   -0.377    0.706   -0.025   -0.025
##     TE4C             -0.093    0.152   -0.613    0.540   -0.045   -0.045
##     TE5C              0.001    0.089    0.011    0.992    0.001    0.001
##     TE6C              0.108    0.163    0.662    0.508    0.046    0.046
##     TE7C              0.337    0.088    3.845    0.000    0.256    0.256
##     TE8C             -0.006    0.096   -0.057    0.954   -0.004   -0.004
##     TE1R              0.091    0.059    1.556    0.120    0.141    0.141
##     TE2R             -0.012    0.038   -0.326    0.745   -0.025   -0.025
##     TE3R              0.029    0.035    0.841    0.400    0.049    0.049
##     TE4R              0.060    0.058    1.042    0.297    0.082    0.082
##     TE5R              0.031    0.030    1.031    0.303    0.067    0.067
##     TE6R              0.034    0.065    0.524    0.600    0.040    0.040
##     TE7R              0.120    0.032    3.700    0.000    0.254    0.254
##     TE8R             -0.018    0.029   -0.614    0.539   -0.037   -0.037
##     TIE1C            -0.077    0.086   -0.887    0.375   -0.043   -0.043
##     TIE2C            -0.012    0.031   -0.401    0.689   -0.009   -0.009
##     TIE3C             0.085    0.039    2.151    0.032    0.051    0.051
##     TIE4C            -0.014    0.047   -0.291    0.771   -0.007   -0.007
##     TIE5C             0.007    0.032    0.220    0.826    0.006    0.006
##     TIE6C            -0.089    0.057   -1.568    0.117   -0.037   -0.037
##     TIE7C             0.185    0.043    4.282    0.000    0.141    0.141
##     TIE8C             0.026    0.035    0.725    0.468    0.019    0.019
##     TIE1R             0.128    0.049    2.625    0.009    0.197    0.197
##     TIE2R            -0.002    0.022   -0.068    0.946   -0.003   -0.003
##     TIE3R             0.063    0.021    3.016    0.003    0.104    0.104
##     TIE4R             0.022    0.028    0.790    0.429    0.030    0.030
##     TIE5R             0.047    0.019    2.448    0.014    0.103    0.103
##     TIE6R            -0.043    0.037   -1.159    0.246   -0.050   -0.050
##     TIE7R             0.080    0.020    3.940    0.000    0.169    0.169
##     TIE8R            -0.028    0.018   -1.515    0.130   -0.057   -0.057
  • Latent variables, standardized loadings: each indicates the share of an item’s variance explained by its construct; all are above 0.6.

Global fit :

Sum_fit = summary(fit1, fit.measures=TRUE,standardized=TRUE)
Sum_fit$fit[c("chisq","df","rmsea","cfi")]
##        chisq           df        rmsea          cfi 
## 700.45462839 399.00000000   0.03696255   0.97374673

Before examining the individual parameters, it is important to first assess the overall goodness of fit for the model. There are several criteria we can use to evaluate the global fit:

  1. Chi-squared test: a low Chi-squared value relative to the degrees of freedom indicates a good fit; the ratio of Chi-squared to degrees of freedom should be below 5 for samples of up to 1,000. With 553 observations here, the scaled measure is 632.247 / 399 = 1.585 and the standard measure is 700.455 / 399 = 1.756. Both ratios are well below 5, indicating a good fit according to the Chi-squared test.

  2. Root Mean Square Error of Approximation (RMSEA): A RMSEA value below 0.05 indicates a good model fit. In this case, we have an RMSEA of 0.037 and a Robust RMSEA of 0.034, which are both below the 0.05 threshold, suggesting that the model has a good fit based on RMSEA.

  3. Comparative Fit Index (CFI): A CFI value above 0.95 indicates a good model fit. In this case, we have a CFI of 0.974, which is above the threshold, indicating a good fit based on CFI.

Given that the global fit measures meet the criteria for goodness of fit, we can conclude that the model is well-fitted. With a well-fitted model, we can proceed to analyze the individual parameter estimates with confidence.
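As a quick sanity check, the chi-square/df ratios quoted above can be reproduced with plain arithmetic (in lavaan itself, `fitMeasures(fit1, c("chisq","df","rmsea","cfi"))` returns the same quantities; the numbers below are copied from the output above):

```r
# Chi-square / df ratios for the standard and scaled test statistics
# (values copied from the model output above)
ratios <- c(standard = 700.455 / 399, scaled = 632.247 / 399)
round(ratios, 3)  # 1.756 and 1.585, both well below the cutoff of 5
```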

modificationindices(fit1) %>% filter(mi > 10)
##             lhs op          rhs     mi    epc sepc.lv sepc.all sepc.nox
## 1    atmosphere =~         Im12 11.167 -0.115  -0.145   -0.128   -0.128
## 2    atmosphere =~       C_REP1 21.773  0.091   0.115    0.157    0.157
## 3         brand =~        C_CR4 11.166  0.305   0.214    0.108    0.108
## 4       variety =~         Im20 14.302 -0.150  -0.194   -0.130   -0.130
## 5       variety =~         Im12 10.903 -0.113  -0.147   -0.129   -0.129
## 6       variety =~         Im13 13.663  0.133   0.172    0.143    0.143
## 7       variety =~       C_REP1 10.720  0.056   0.073    0.100    0.100
## 8       cuisine =~         Im11 12.873  0.216   0.175    0.153    0.153
## 9          mode =~         Im12 17.448 -0.179  -0.215   -0.190   -0.190
## 10         mode =~         Im13 23.644  0.217   0.262    0.218    0.218
## 11 satisfaction =~       COM_A3 12.402  0.212   0.187    0.115    0.115
## 12   commitment =~         Im11 13.312  0.142   0.163    0.142    0.142
## 13   commitment =~       C_REP1 56.860  0.174   0.199    0.272    0.272
## 14   commitment =~       C_REP2 15.936 -0.079  -0.091   -0.146   -0.146
## 15   commitment =~       C_REP3 12.246 -0.065  -0.075   -0.135   -0.135
## 16   repurchase =~        SAT_2 15.148  0.256   0.153    0.152    0.152
## 17   repurchase =~       COM_A1 11.069  0.291   0.174    0.121    0.121
## 18         Im21 ~~         Im22 14.074 -0.239  -0.239   -0.381   -0.381
## 19         Im21 ~~       C_REP3 11.210  0.052   0.052    0.172    0.172
## 20         Im11 ~~         Im12 13.834  0.146   0.146    0.291    0.291
## 21         Im11 ~~         Im13 19.981 -0.183  -0.183   -0.324   -0.324
## 22         Im13 ~~          Im1 10.522  0.067   0.067    0.356    0.356
## 23          Im1 ~~          Im2 37.374 14.223  14.223   86.297   86.297
## 24         Im10 ~~         Im14 37.228 97.849  97.849 1110.833 1110.833
## 25          Im6 ~~          Im7 37.383 10.888  10.888   40.086   40.086
## 26         Im16 ~~         Im19 37.373  3.054   3.054    6.702    6.702
## 27       COM_A1 ~~       COM_A2 24.605  0.250   0.250    0.326    0.326
## 28       COM_A3 ~~       C_REP1 13.001  0.077   0.077    0.194    0.194
## 29       C_REP1 ~~       C_REP3 18.366 -0.056  -0.056   -0.368   -0.368
## 30       C_REP2 ~~       C_REP3 53.856  0.103   0.103    1.250    1.250
## 31 satisfaction ~~   commitment 37.384  0.215   0.346    0.346    0.346
## 32 satisfaction  ~   cocreation 37.385  0.457   0.859    0.859    0.859
## 33 satisfaction  ~   repurchase 37.384  1.354   0.915    0.915    0.915
## 34 satisfaction  ~   commitment 37.384  0.250   0.324    0.324    0.324
## 35   commitment  ~   cocreation 37.383 -1.343  -1.946   -1.946   -1.946
## 36   commitment  ~   repurchase 37.384  2.236   1.166    1.166    1.166
## 37   commitment  ~ satisfaction 37.384  0.480   0.370    0.370    0.370

Local Fit Measures

#Local Fit

std.loadings <- inspect(fit1, what="std")$lambda
check = std.loadings
check[check > 0] <- 1
std.loadings[std.loadings == 0] <- NA
std.loadings2 <- std.loadings^2
std.theta <- inspect(fit1, what="std")$theta

#Individual item Reliability
IIR=std.loadings2/(colSums(std.theta) + std.loadings2)
IIR
##        decrtn atmsph brand varity cuisin france  mode prfssn stsfct cmmtmn
## Im3     0.877     NA    NA     NA     NA     NA    NA     NA     NA     NA
## Im4     0.940     NA    NA     NA     NA     NA    NA     NA     NA     NA
## Im5     0.578     NA    NA     NA     NA     NA    NA     NA     NA     NA
## Im20       NA  0.712    NA     NA     NA     NA    NA     NA     NA     NA
## Im21       NA  0.623    NA     NA     NA     NA    NA     NA     NA     NA
## Im22       NA  0.761    NA     NA     NA     NA    NA     NA     NA     NA
## Im11       NA     NA 0.375     NA     NA     NA    NA     NA     NA     NA
## Im12       NA     NA 0.760     NA     NA     NA    NA     NA     NA     NA
## Im13       NA     NA 0.731     NA     NA     NA    NA     NA     NA     NA
## Im1        NA     NA    NA  0.949     NA     NA    NA     NA     NA     NA
## Im2        NA     NA    NA  0.817     NA     NA    NA     NA     NA     NA
## Im10       NA     NA    NA     NA  0.847     NA    NA     NA     NA     NA
## Im14       NA     NA    NA     NA  0.912     NA    NA     NA     NA     NA
## Im6        NA     NA    NA     NA     NA  0.676    NA     NA     NA     NA
## Im7        NA     NA    NA     NA     NA  0.892    NA     NA     NA     NA
## Im17       NA     NA    NA     NA     NA     NA 0.940     NA     NA     NA
## Im18       NA     NA    NA     NA     NA     NA 0.732     NA     NA     NA
## Im16       NA     NA    NA     NA     NA     NA    NA  0.584     NA     NA
## Im19       NA     NA    NA     NA     NA     NA    NA  0.727     NA     NA
## SAT_1      NA     NA    NA     NA     NA     NA    NA     NA  0.748     NA
## SAT_2      NA     NA    NA     NA     NA     NA    NA     NA  0.671     NA
## SAT_3      NA     NA    NA     NA     NA     NA    NA     NA  0.390     NA
## COM_A1     NA     NA    NA     NA     NA     NA    NA     NA     NA  0.634
## COM_A2     NA     NA    NA     NA     NA     NA    NA     NA     NA  0.698
## COM_A3     NA     NA    NA     NA     NA     NA    NA     NA     NA  0.667
## COM_A4     NA     NA    NA     NA     NA     NA    NA     NA     NA  0.710
## C_CR1      NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
## C_CR3      NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
## C_CR4      NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
## C_REP1     NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
## C_REP2     NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
## C_REP3     NA     NA    NA     NA     NA     NA    NA     NA     NA     NA
##        cocrtn rprchs
## Im3        NA     NA
## Im4        NA     NA
## Im5        NA     NA
## Im20       NA     NA
## Im21       NA     NA
## Im22       NA     NA
## Im11       NA     NA
## Im12       NA     NA
## Im13       NA     NA
## Im1        NA     NA
## Im2        NA     NA
## Im10       NA     NA
## Im14       NA     NA
## Im6        NA     NA
## Im7        NA     NA
## Im17       NA     NA
## Im18       NA     NA
## Im16       NA     NA
## Im19       NA     NA
## SAT_1      NA     NA
## SAT_2      NA     NA
## SAT_3      NA     NA
## COM_A1     NA     NA
## COM_A2     NA     NA
## COM_A3     NA     NA
## COM_A4     NA     NA
## C_CR1   0.724     NA
## C_CR3   0.682     NA
## C_CR4   0.649     NA
## C_REP1     NA  0.666
## C_REP2     NA  0.867
## C_REP3     NA  0.572
  • All values are greater than 0.4 except SAT_3 (SATISFACTION: “How satisfied are you with Galeries Lafayette Berlin?”), which is 0.390, suggesting that this item is a weak indicator.
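Because the indicators are standardized, each item’s reliability equals its squared standardized loading, i.e. one minus its standardized residual variance. A minimal hand-check for SAT_3, using its residual variance from the Variances table above:

```r
# Individual item reliability of SAT_3: 1 - standardized residual variance
theta_sat3 <- 0.610        # Std.all residual variance of SAT_3 (from output)
iir_sat3 <- 1 - theta_sat3
iir_sat3                   # 0.39, below the 0.4 rule of thumb
```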
#Composite/Construct Reliability
sum.std.loadings<-colSums(std.loadings, na.rm=TRUE)^2
sum.std.theta<-rowSums(std.theta)
sum.std.theta=check*sum.std.theta
CR=sum.std.loadings/(sum.std.loadings+colSums(sum.std.theta))
CR
##      decoration      atmosphere           brand         variety         cuisine 
##       0.9215577       0.8740995       0.8285131       0.9379212       0.9360530 
##          france            mode professionalism    satisfaction      commitment 
##       0.8785003       0.9103502       0.7914392       0.8173149       0.8934602 
##      cocreation      repurchase 
##       0.8670791       0.8749194
  • Composite reliability should be above 0.6; all constructs exceed this threshold, indicating that the items reliably and consistently measure their underlying constructs.
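The CR figures can be verified by hand from the standardized solution. A sketch for the decoration construct, taking the squared loadings from the IIR table above (small deviations from the reported 0.9216 are rounding only):

```r
# Composite reliability: (sum lambda)^2 / ((sum lambda)^2 + sum theta)
lambda2 <- c(Im3 = 0.877, Im4 = 0.940, Im5 = 0.578)  # squared std. loadings (IIR)
lambda  <- sqrt(lambda2)
theta   <- 1 - lambda2                               # std. residual variances
CR_deco <- sum(lambda)^2 / (sum(lambda)^2 + sum(theta))
round(CR_deco, 3)                                    # ~0.922, matching the output
```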
#Average Variance Extracted 
std.loadings<- inspect(fit1, what="std")$lambda
std.loadings <- std.loadings^2
AVE=colSums(std.loadings)/(colSums(sum.std.theta)+colSums(std.loadings))
AVE
##      decoration      atmosphere           brand         variety         cuisine 
##       0.7983096       0.6986333       0.6223003       0.8832435       0.8798287 
##          france            mode professionalism    satisfaction      commitment 
##       0.7841304       0.8359891       0.6555353       0.6029677       0.6771638 
##      cocreation      repurchase 
##       0.6850897       0.7014341
  • AVE should be higher than 0.5; it measures the amount of variance captured by a latent variable relative to the variance due to measurement error. All constructs meet this criterion.
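With standardized indicators, AVE reduces to the mean of the squared standardized loadings. A hand-check for decoration, with the squared loadings taken from the IIR table above:

```r
# AVE of 'decoration' = mean of the squared standardized loadings
lambda2 <- c(Im3 = 0.877, Im4 = 0.940, Im5 = 0.578)
AVE_deco <- sum(lambda2) / (sum(lambda2) + sum(1 - lambda2))  # = mean(lambda2)
round(AVE_deco, 3)  # ~0.798, matching the output above
```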
# Discriminant Validity 
std_fit1=inspect(fit1, "std")
std_fit1$psi^2
##                 decrtn atmsph brand varity cuisin france  mode prfssn stsfct
## decoration       1.000                                                      
## atmosphere       0.218  1.000                                               
## brand            0.221  0.175 1.000                                         
## variety          0.195  0.200 0.228  1.000                                  
## cuisine          0.174  0.087 0.204  0.097  1.000                           
## france           0.115  0.111 0.093  0.052  0.345  1.000                    
## mode             0.266  0.266 0.319  0.271  0.106  0.107 1.000              
## professionalism  0.429  0.227 0.282  0.362  0.249  0.137 0.362  1.000       
## satisfaction     0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.000  0.332
## commitment       0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.000  0.000
## cocreation       0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.000  0.000
## repurchase       0.000  0.000 0.000  0.000  0.000  0.000 0.000  0.000  0.000
##                 cmmtmn cocrtn rprchs
## decoration                          
## atmosphere                          
## brand                               
## variety                             
## cuisine                             
## france                              
## mode                                
## professionalism                     
## satisfaction                        
## commitment       0.435              
## cocreation       0.000  0.688       
## repurchase       0.000  0.000  0.445
  • These squared correlations should be compared with the AVE values to assess discriminant validity. Discriminant validity is established when the AVE of each latent variable is greater than its squared correlation with every other latent variable in the model. For example, the squared correlation between decoration and atmosphere is 0.218, while their AVEs are 0.798 and 0.699 respectively; since both AVEs exceed 0.218, discriminant validity is established for this pair.
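This comparison is the Fornell-Larcker criterion; a minimal sketch for the decoration/atmosphere pair, using the numbers quoted above:

```r
# Fornell-Larcker check: the AVE of each construct must exceed the squared
# correlation between the two constructs
sq_cor <- 0.218                                   # decoration ~~ atmosphere, squared
AVE    <- c(decoration = 0.798, atmosphere = 0.699)
all(AVE > sq_cor)                                 # TRUE -> discriminant validity holds
```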

Plot the results:

semPaths(fit1, nCharNodes = 0, style = "lisrel", rotation = 2)

Are the mechanisms driving satisfaction and affective commitment similar?

No, the mechanisms driving satisfaction and affective commitment differ. Here is a summary of the differences:

Mechanisms driving satisfaction:

  1. Professionalism (e): Positive coefficient indicates that it has a positive impact on satisfaction.

  2. Variety (i): Positive coefficient indicates that it has a positive impact on satisfaction.

  3. Decoration (l): Negative coefficient indicates that it has a negative impact on satisfaction.

Mechanisms driving affective commitment:

  1. France (o): Positive coefficient indicates that it has a positive impact on affective commitment.

  2. Atmosphere (s): Positive coefficient indicates that it has a positive impact on affective commitment.

As we can see, the factors affecting satisfaction and affective commitment are different, and their respective impacts on these outcomes also vary. This highlights the distinct mechanisms driving satisfaction and affective commitment toward Galeries Lafayette.


On the other hand, we can observe that satisfaction has a negative coefficient on co-creation, which implies that the more satisfied clients are, the less likely they are to participate in campaigns to improve Galeries Lafayette. In contrast, affective commitment has a positive relationship with co-creation, meaning that customers with higher affective commitment are more likely to participate in such campaigns.

Additionally, both affective commitment and satisfaction have a positive effect on repurchase intention from Galeries Lafayette. This indicates that clients who are more satisfied and have a stronger emotional attachment to the brand are more likely to return for future purchases.

Are satisfaction and affective commitment mediating the impact of image perceptions on outcomes? If yes for which outcomes?

In the regression analysis, none of the eight image factors has a significant direct effect on co-creation or repurchase intention (all p-values are non-significant):

  • Co-creation ~ all eight factors

  • Repurchase ~ all eight factors

However, we do observe indirect effects where satisfaction and affective commitment mediate the impact of image perceptions on these outcomes. We can identify significant indirect effects by examining the defined parameters:

  1. For ‘a’ (satisfaction on co-creation), we have:
  • ae: ‘a’ represents satisfaction (in the co-creation regression) and ‘e’ professionalism; satisfaction mediates the impact of professionalism on co-creation.

    • Professionalism → Satisfaction (mediator) → Co-creation
  • ai: ‘a’ represents satisfaction and ‘i’ variety; satisfaction mediates the impact of variety on co-creation.

    • Variety → Satisfaction (mediator) → Co-creation

This relationship is significant, meaning that higher professionalism leads to increased customer satisfaction, which in turn results in lower co-creation (since customer satisfaction has a negative impact on co-creation, -0.357). The same reasoning applies to variety.
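Each defined parameter is the product of its two mediation paths (e.g. ae = a × e). As a back-of-the-envelope check, with values copied from this document and therefore subject to rounding, the professionalism → satisfaction path implied by ae and the satisfaction → co-creation path is:

```r
# Implied 'e' path from the product rule ae = a * e
a  <- -0.357     # satisfaction -> co-creation (reported earlier in the document)
ae <- -0.164     # indirect effect from the Defined Parameters table
e  <- ae / a     # implied professionalism -> satisfaction path
round(e, 3)      # ~0.459 (rounding in the source values adds some slack)
```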

  2. For ‘b’ (affective commitment on co-creation), we have:
  • bo: ‘b’ represents commitment and ‘o’ France.

    • France → Affective Commitment (mediator) → Co-creation
  • bs: ‘b’ represents commitment and ‘s’ atmosphere.

    • Atmosphere → Affective Commitment (mediator) → Co-creation

This relationship is significant, meaning that a stronger presence of French culture leads to increased affective commitment, which in turn results in higher co-creation. The same reasoning applies to atmosphere.

  3. For ‘c’ (satisfaction on repurchase intention), we have:
  • ce: Professionalism → Satisfaction (mediator) → Repurchase Intention
  • ci: Variety → Satisfaction (mediator) → Repurchase Intention
  • cl: Decoration → Satisfaction (mediator) → Repurchase Intention

All three have significant indirect effects on repurchase intention through satisfaction, with professionalism and variety having positive estimates, while decoration has a negative estimate.

  4. For ‘d’ (commitment on repurchase intention), we have:
  • do: France → Affective Commitment (mediator) → Repurchase Intention

  • ds: Atmosphere → Affective Commitment (mediator) → Repurchase Intention

Both have significant, positive indirect effects on repurchase intention through affective commitment.

Question 3

What is driving the two distinct outcomes? Which image dimensions have the largest total effect on each of them?

Two distinct outcomes, co-creation and repurchase intention, are driven by different factors. Both satisfaction and affective commitment influence repurchase intention and co-creation, although their magnitudes differ.

# Total effects (:= TE)
# for cocreation
TE7C := aa + (a*k) + (b*s)
# for repurchase
TE7R := ii + (c*k) + (d*s)

# Total indirect effects (:= TIE)
# for cocreation
TIE3C := (a*g) + (b*o)
TIE7C := (a*k) + (b*s)
# for repurchase
TIE1R := (c*e) + (d*m)
TIE3R := (c*g) + (d*o)
TIE5R := (c*i) + (d*q)
TIE7R := (c*k) + (d*s)

To identify which image dimensions have the largest total effect on each outcome, we examine the defined-parameters section. TE7C and TE7R are the only significant total effects, and both correspond to the atmosphere dimension: atmosphere therefore has the largest total effect on both co-creation and repurchase intention.
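Since each total effect is defined as direct effect plus total indirect effect, the (non-significant) direct path of atmosphere on co-creation can be backed out from the two significant parameters in the table above; a small sketch with copied values:

```r
# Direct effect implied by TE = direct + TIE (atmosphere -> co-creation)
TE7C  <- 0.337   # total effect of atmosphere on co-creation
TIE7C <- 0.185   # its total indirect effect via satisfaction and commitment
aa    <- TE7C - TIE7C
round(aa, 3)     # 0.152, the implied direct path 'aa'
```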

For significant indirect effects, we have the following:

  • TIE3C: indirect effect of France on co-creation

  • TIE7C: indirect effect of atmosphere on co-creation (as expected, since its total effect is significant)

  • TIE1R: indirect effect of professionalism on repurchase intention

  • TIE3R: indirect effect of France on repurchase intention

  • TIE5R: indirect effect of variety on repurchase intention

  • TIE7R: indirect effect of atmosphere on repurchase intention

In summary, atmosphere has the largest total effect on both co-creation and repurchase intention. Additionally, significant indirect effects include France on co-creation and repurchase intention, variety on repurchase intention, and professionalism on repurchase intention.